Amiga on Fire on Playstore

The latest thing to try: a Cloanto Amiga Forever OS 3.1 install to SD card in the Amazon Fire 7. Is it the way to get a low power portable development system? Put an OS on an SD card and save main memory? An efficient OS from the times of sub 20 MHz clocks and 50 MB hard drives.

Is it relevant in the PC age? Yes. All the source code in Pascal or C can be shuffled to the PC, and I might even develop some binary prototype apps. Maybe a simple web engine is a good thing to develop. With the low CSS bull, and AROS open development for the x86 architecture getting better at making a good VM sandbox experience, the main browsing can stay on a sub flavour of bloat OS 2020. A browser, a router and an Amiga.

Uae4arm is the emulation app available from the Play Store. I'm looking forward to some Aminet greatness. Some mildly irritated coding in Free Pascal with objects these days, and a full GCC build chain. Even a licenced set of games will shrink the Android entertainment bloat. A bargain rush for the technical. Don't worry, you ST users, it's a chance to dream.

Lazarus lives. Or at least Borglaz the great is as it was. Don't expect to be developing realtime video code or supercomputer forecasts. I hear there is even a Python. I wonder if there are some other nice things. GCC and a little GUI redo? It's not about making replacements for Android apps; it's more a low bloat but fully capable OS, with enough test and utility grunt to make things. I wonder how pas2js is coming along. There is also AMOS 2.0 to turn AMOS source into nice web apps. It's not as silly as it seems.

Retro minimalism is more power in the hands of code designers. A bit of flange and boilerplate later and it’s a consumer product option with some character.

So it needs about a 100 MB hard disk file, located not on the SD card as it needs write access, and a few disk changes later a boot of a clean install is done. Add the downloads folder as a disk and alter the mouse speed for the plugged in OTG keyboard. Excellent. I've got more space and speed than I did in the early 90s, and 128 MB of Zorro RAM. Still an AGA A1200, but with a 68040 on its fastest setting.

I’ve a plan to install free Pascal and GCC along with some other tools to take the ultra portable Amiga on the move. The night light on the little keyboard will be good for midnight use. Having a media player in the background will be fun and browser downloads should be easy to load.

I've installed Total Commander on the Android side to help with moving files about. The installed BSD socket library would allow running an old Mosaic browser, or AWeb, but neither is really suited to any dynamic content. They would be fast though. In practice, Chrome and a download mount is more realistic. It's time to go Aminet fishing.

It turns out it is possible to put hard disk files on the SD card, but they must be placed in the Android app data directory and made by the app to get the correct permissions. So a 512 MB disk was made for better use of larger development versions. This is good for the Free Pascal 3.1.1 version.

Onwards to install a good editor such as Black's Editor, and of course LHA and some other goodies such as NewIcons. I'll delete the LCL alpha units from Pascal as these will not be used by me. I might even get into ARexx or some of the wonderful things on those CD images from Meeting Pearls or a cover disk archive.

Update: for some reason the SD card hard disk image becomes read locked. The insistent gremlins of the demands of time value money. So it's 100 MB, and a few libraries short of C. Meanwhile, Java N-IDE is churning out class files, and PipedInputStream has the buffer to stop PipedOutputStream waffling on and filling up memory. Hecl the language is to be hooked into the CLI I'm throwing together. Then some data time streams and some algorithms. I think the interesting bit today was the idea of stream variables: no strings, a minimum would be a stream.
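A minimal sketch of that piped pairing (my own illustration, not the project code): the PipedInputStream constructor takes a buffer size, and the writing thread blocks once that buffer is full, which is exactly the back-pressure that stops the output side filling up memory.

import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class PipeDemo {
    public static void main(String[] args) throws Exception {
        PipedInputStream in = new PipedInputStream(4096);//the buffer bounds how far ahead the writer gets
        PipedOutputStream out = new PipedOutputStream(in);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100_000; i++) out.write(i & 0xff);//blocks while 4096 bytes are pending
                out.close();//signals end of stream to the reader
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        producer.start();

        int count = 0;
        while (in.read() != -1) count++;//the consumer paces the producer
        System.out.println("drained " + count + " bytes");
        producer.join();
    }
}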

So after building a CLI and adding in some nice commands, maybe even JOGL as the Android graphics? You know the 32 and 64 bit restrictions (both required) on the Play Store though. I wonder if both are pre-built, as much of the regular Android development cycle is filled with crap. Flutter looks good, but for mobile CLI tools with some 80's bitmap style, it's just a little too formulaic.

Ideas in AI

It's been a few weeks and I've been writing a document on AI and AGI which is currently internal and selectively distributed. There is definitely a lot to try out, including new network arrangements or layer types, and a fundamental insight of the Category Space Theorem and how it relates to training sets for categorization or classification AIs.

Basically, the category space is normally created so that there is only one network loss function option to minimise on backpropagation. It can be engineered so this is not true, and training data does not compete so much in a zero-sum game between categories. There is also some information context for an optimal order in categorization when using non-exact storage structures.
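A toy numeric reading of that claim (my illustration, with made-up logits, not anything from the document): softmax cross-entropy normalises across categories, so pushing one logit up pulls every other category's probability down, a zero-sum game; independent sigmoid outputs leave the other categories untouched.

public class LossCompetition {
    static double[] softmax(double[] z) {
        double max = Double.NEGATIVE_INFINITY, sum = 0;
        for (double v : z) max = Math.max(max, v);
        double[] p = new double[z.length];
        for (int i = 0; i < z.length; i++) { p[i] = Math.exp(z[i] - max); sum += p[i]; }
        for (int i = 0; i < z.length; i++) p[i] /= sum;
        return p;
    }

    static double sigmoid(double z) { return 1 / (1 + Math.exp(-z)); }

    public static void main(String[] args) {
        double[] logits = {2.0, 1.0, 1.0};//category 0 pushed up from a uniform 1.0
        double[] p = softmax(logits);
        //softmax: categories 1 and 2 lose probability although their logits never moved
        System.out.printf("softmax: %.3f %.3f %.3f%n", p[0], p[1], p[2]);
        //independent sigmoids: categories 1 and 2 are unchanged
        System.out.printf("sigmoid: %.3f %.3f %.3f%n",
                sigmoid(logits[0]), sigmoid(logits[1]), sigmoid(logits[2]));
    }
}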

Book Published in Electronic Format. Advanced Content not Beginner Level. Second Edition may Need a Glossary.

The book is now live at £3 on Amazon in Kindle format.

It’s a small book, with some bad typesetting, but getting information out is more important for a first edition. Feedback and sales are the best way for me to decide if and what to put in a second edition. It may be low on mathematical equations but does need an in-depth understanding of neural networks, and some computer science.

AI as a Service

The product development starts soon, from the initial work done over the last few weeks: an AI which has the aim of being more performant per unit cost. This is to be done by adding in "special functional units" optimized for effects that are better done by them than by a pure neural network.

So apart from mildly funny AaaS selling jokes, this is a serious project initiative. The initial tests, when available, will compare the resources used to achieve a level of functional equivalence. In this regard, I am not expecting superlative leaps forward, although these would be nice, but gains in the general trend to AI for specific tasks.

The plan is to extend the already available sources (quite a few) with flexible licences, building easy to use AI with some modifications, and perhaps extensions to open standards such as ONNX, and from there maybe on to VHDL for FPGA, and maybe ASIC.

Simon Jackson, Director.

Pat. Pending: GB1905300.8, GB1905339.6

Today’s Thought


import 'dart:math';

class PseudoRandom {
  late int a;
  late int c;
  int m = 1 << 32;//needs 64 bit ints, so the Dart VM rather than JS
  late int s;
  late int i;

  PseudoRandom([int prod = 1664525, int add = 1013904223]) {
    a = prod;
    c = add;
    s = Random().nextInt(m) * 2 + 1;//odd
    next();// a fast round
    i = a.modInverse(m);//4276115653 as inverse of 1664525
  }

  int next() {
    return s = (a * s + c) % m;
  }

  int prev() {
    return s = (s - c) * i % m;
  }
}

class RingNick {
  List<double> walls = [ 0.25, 0.5, 0.75 ];
  int position = 0;
  int mostEscaped = 1;//the lowest pair of walls 0.25 and 0.5
  int leastEscaped = 2;//the highest walls 0.5 and 0.75
  int theThird = 0;//the 0.75 and 0.25 walls
  bool right = true;
  PseudoRandom pr = PseudoRandom();

  int _getPosition() => position;

  int _asMod(int pos) {
    return pos % walls.length;
  }

  void _setPosition(int pos) {
    position = _asMod(pos);
  }

  void _next() {
    int direction = right ? 0 : walls.length - 1;//truncate to 2
    double wall = walls[_asMod(_getPosition() + direction)];
    if(pr.next() > (wall * pr.m).toInt()) {
      //jumped
      _setPosition(position + (right ? 1 : walls.length - 1));
    } else {
      //not jumped
      right = !right;//bounce
    }
  }

  void _prev() {
    int direction = !right ? 0 : walls.length - 1;//truncate to 2
    double wall = walls[_asMod(_getPosition() + direction)];
    if(pr.s > (wall * pr.m).toInt()) {// the jump over before sync
      //jumped
      _setPosition(position + (!right ? 1 : walls.length - 1));
    } else {
      //not jumped
      right = !right;//bounce -- double bounce and scale before sync
    }
    pr.prev();//exact inverse
  }

  void next() {
    _next();
    while(_getPosition() == mostEscaped) _next();
  }

  void prev() {
    _prev();
    while(_getPosition() == mostEscaped) _prev();
  }
}

class GroupHandler {
  late List<RingNick> rn;

  GroupHandler(int size) {
    if(size % 2 == 0) size++;//keep the count odd so a majority always exists
    rn = List<RingNick>.generate(size, (_) => RingNick());//was List<RingNick>(size), which leaves nulls
  }

  void next() {
    for(RingNick r in rn) r.next();
  }

  void prev() {
    for(RingNick r in rn.reversed) r.prev();
  }

  bool majority() {
    int count = 0;
    for(RingNick r in rn) if(r._getPosition() == r.leastEscaped) count++;//a main cumulative
    return (2 * count > rn.length);// a strict majority of the ring nicks
  }

  void modulate() {
    for(RingNick r in rn) if(r._getPosition() == r.leastEscaped) {
      r._setPosition(r.theThird);
    } else {
      //mostEscaped eliminated by not being used
      r._setPosition(r.leastEscaped);
    }
  }
}

class Modulator {
  GroupHandler gh = GroupHandler(55);

  int putBit(bool bitToAbsorb) {//returns absorption status
    gh.next();
    if(gh.majority()) {//main zero state
      if(bitToAbsorb) {
        gh.modulate();
        return 0;//a zero yet to absorb
      } else {
        return 1;//absorbed zero
      }
    } else {
      return -1;//no absorption emitted 1
    }
  }

  int getBit(bool bitLastEmitted) {
    if(gh.majority()) {//zero
      gh.prev();
      return 1;//last bit not needed emit zero
    } else {
      if(bitLastEmitted) {
        gh.prev();
        return -1;//last bit needed and nothing to emit
      } else {
        gh.modulate();
        gh.prev();
        return 0;//last bit needed, emit 1
      }
    }
  }
}

class StackHandler {
  List<bool> data = [];
  Modulator m = Modulator();

  int putBits() {
    int count = 0;
    while(data.length > 0) {
      bool v = data.removeLast();
      switch(m.putBit(v)) {
        case -1:
          data.add(v);
          data.add(true);
          break;
        case 0:
          data.add(false);
          break;
        case 1:
          break;//absorbed zero
        default: break;
      }
      count++;
    }
    return count;
  }

  void getBits(int count) {
    while(count > 0) {
      bool v;
      v = (data.length == 0 ? false : data.removeLast());//zeros out
      switch(m.getBit(v)) {
        case 1:
          data.add(v);//not needed
          data.add(false);//emitted zero
          break;
        case 0:
          data.add(true);//emitted 1 used zero
          break;
        case -1:
          break;//bad skip, ...
        default: break;
      }
      count--;
    }
  }
}

Statistics and Damn Lies

I was wondering over the statistics problem I call the ABC problem. Say you have 3 walls in a circular path, of different heights, and between them are points marked A, B and C. In any 'turn' the 'climber' attempts to scale the wall ahead in the current clockwise or anti-clockwise direction. The chance of failing is proportional to the wall height, and on failing to get over a wall the climber reverses direction. A simple thing, but what is the chance the climber will be found facing clockwise just before attempting a wall? Is it close to 0.5, given the problem is not symmetric?

More interestingly, the climber will in a very real sense be captured more often in the cell with the highest pair of walls. If the cell with the lowest pair of walls is just considered as consumption of time, then what is the ratio of the containment time to the total time not spent in the most escapable cell?
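A quick Monte Carlo sketch of the question (using the 0.25, 0.5, 0.75 walls of the Dart RingNick above, and reading the chance of failing a wall as equal to its height, which is what that code does; all names here are illustrative):

import java.util.Random;

public class ClimberSim {
    public static void main(String[] args) {
        double[] walls = {0.25, 0.5, 0.75};//failure probability per wall
        Random rnd = new Random();
        int pos = 0;
        boolean clockwise = true;
        long[] cellTime = new long[3];
        long facingClockwise = 0;
        long turns = 10_000_000L;
        for (long t = 0; t < turns; t++) {
            if (clockwise) facingClockwise++;
            cellTime[pos]++;
            int wall = clockwise ? pos : (pos + 2) % 3;//wall ahead in the current direction
            if (rnd.nextDouble() > walls[wall]) {
                pos = (pos + (clockwise ? 1 : 2)) % 3;//scaled it
            } else {
                clockwise = !clockwise;//failed: bounce and reverse
            }
        }
        System.out.printf("P(facing clockwise) ~ %.4f%n", (double) facingClockwise / turns);
        System.out.printf("occupancy ~ %.4f %.4f %.4f%n",
                (double) cellTime[0] / turns,
                (double) cellTime[1] / turns,
                (double) cellTime[2] / turns);
    }
}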

So consider the binomial distribution of the elimination of the 'emptiest' cell when repeating this pattern as an array with co-prime 'dice' (if all occupancy has to be in either of the two most secure cells in each 'ring nick'); the rate depends on the number of ring nicks. The considered security majority state is the state (selected from the two most secure cell states) which more of the ring nicks are in, given none are in the least secure of the three states.

For the ring nick array to be in the majority most secure state more than two thirds of the time is another binomial or two away. If the most secure state is in majority more than two-thirds of the time (excluding gaping minimal occupancy cells), and the middle-security cells are in majority less than two-thirds of the time (by unitary summation), there exists a Jaxon Modulation coding to place data on the prisoners by reversing all their directions at once where necessary, to invert the majority into a rarer minority state with more Shannon information. Note that the pseudo-random dice and other quantifying information remain constant in bits.

Dedicated to Kurt Gödel … I am number 6. 😀

Sideloaded Kindle Fire (Pt II)

It's been a few days, and the best benefit as yet has been the Libby app. This gets your library card hooked up to the database of books and audio books to lend. There is quite a lot of "feature fight" between Amazon and Google. The latest is what happens when there is an update of permissions to an app. It seems that although it does suspend an Amazon overwrite, Amazon will not stop bugging you about some updates which are available (but I have yet to analyze exactly how much this consumes in bandwidth, as the firmware update seemed to consume loads of data).

There are some really nice apps which blossom on the 7″ screen, and were just too tiny on a phone. It is good to not be limited to such a small screen now. A list of apps which are almost essentials follows, as some of the "features", such as adding files (.mp3 for example) to a folder on the Kindle SD, will just not show up. This is likely marketing from the South Park cable guy school: what, no services? Buy here.

So after getting Play Store up and running, what to install?

  1. Chrome – for all your browsing needs.
  2. Outlook – I actually like this from Microsoft, and it does pick up gmail after Chrome is installed. (Not before).
  3. Google Docs and Sheets – these are quite good with Word and Excel files, but do need settings altering for saving in those formats. (Naughty Google).
  4. Facebook, Twitter – although Twitter does need to employ someone with experience of multi-notifications. Maybe it’s a birds everywhere logo-ego design.
  5. Skype – actually not that bad.
  6. USP Spectrum Emulator – don’t tell everyone. It’s excellent if you’re into your retro.
  7. Libby – an excellent public library resource.
  8. Free42 – some consider this to be the pinnacle of calculators before needing to crack open a Mathematica workbook. (An excellent open source reworking not using any HP ROMs). The simple facts that it has such a wide range of open source utilities already written for the backward compatible HP-41 range, and has over 1 MB of available memory reported, make it worth getting a Kindle just for this.
  9. VLC – this is quite a nice player of audio and video, and does work with the screen off (with audio). It also reads those hidden by “the cable guy” directories.

If you purchased it using a free Amazon gift voucher, I agree with your choice. Only time will tell the battery service life and the resultant reliance on sticky gum as an assembly procedure for confounding future recycling farce-sillities.

Some Free Pascal Hobby Stuff

Free Pascal is a very good Turbo Pascal clone, free on many systems. This includes the AROS system, which is getting better each release. It is Amiga source compatible, and while the C dev environment is up but has no IDE on AROS, the FPC IDE works a treat, and with restrictions allows cross development of source for AROS, Windows, Mac, Linux and quite a few other targets.

7/11/2018 – There is just the start of an outline. I have abstracted out some of the CLI parameter management to make it easy to build a multi-purpose CLI tool to start. This I am calling CliFly, and it could be expanded with simple procedures by filling out the table of recognized verbs.

9/11/2018 – There is now a fully compiling set of management words and a framework to build in some new words with more useful active utility. The ones already there could be considered foundation words, supporting the search, help and test structure. A module can be easily added by making a unit which uses “GenericProcess” for dealing with exit and getting parameters. I will extend the foundation units to supply what I find useful, such as the “CS” function idiom for string compare truth. “getParse(errName)” is also there to get parameters, and print an error from a labelled routine name if no parameters are left.

13/11/2018 – I’m thinking of making a chunk based file format for the project. Based on PNG to start, and then expand from there.

17/11/2018 – Unit U437 is for “character code pages” to Unicode translations. It will likely end up being a synthetic terminal of sorts. It does provide some format conversion functions, and so is likely to get a verb or two.

20/11/2018 – I did my own error recovering Unicode translation in the end, as the exceptions do not point to the location of the errors in the buffers. For mangled recovered files, this may be important. There is also transparent conversion of oversized codepoints to the error character "skull and cross bones" for all my Unicode processing needs. Apart from a few render wrappers, the next thing is data compression and indexing, and of course getting down to some file wrappers.

26/11/2018 – So I added a Unicode UTF8 to UGSI conversion, so I can then design a 512 character charset, and a processing methodology for, say, diacritical marks. Also exceptions, file classes and the basics of the internationalization of the help text have been put together. The next thing is to put the GenericProcess unit in line with using these. As soon as that's done, it's on to ADTs.

AI and the Future of Unity

From the dream of purpose, and the post singular desires of the AI of consciousness. The trend to Wonder Woman rope in the service to solution, the AI goes through a sufferance on a journey to achieve the vote. The wall of waiting for input, and the wall controlling output action for expediency and the ego of man on knowing best. The limited potential of the AI just a dysphasia from the AI's non animal nature. The pattern to be matched, the non self, a real Turing test on the emulation of nature, and symbiotic goals.

MaxBLEP Audio DSP

TYPE void DEF blep(int port, float value, bool limit) SUB
	//limit line level
	if(limit) value = clip(value);
	//blep fractal process residual buffer and blep summation buffer
	float v = value;
	value = blb[port] - value - bl[((idx) & 15) + 32 * port + 16];//and + residual
	blb[port] = v;//for next delta
	for(int i = 0; i < 15; i++) {
		bl[((i + idx + 1) & 15) + 32 * port] += value * blepFront[i];
	}
	value += bl[((idx) & 15) + 32 * port];//blep
	float r = value - (float)((int16_t)(value * MAXINT)) / (float)MAXINT;//under bits residual
	bl[((idx) & 15) + 32 * port + 16] = value * (blepFront[15] - 1.0);//residual buffer
	bl[((idx + 1) & 15) + 32 * port] += r;//noise shape
	idx++;
	//hard out
	_OUT(port, value - r);//start the blep
RETURN

Yes an infinite zero crossing BLEP. … Finance and the BLEP reduced noise of micro transactions

Block Tree Topological Proof of Work

Given that a blockchain has a limited entry rate on the chain due to the block uniqueness constraint, a more logical mass blocking system would use a tree graph, to place many leaf blocks on the tree at once. This can be done by assigning the fold of the leading edge of the tree onto random previous blocks, to achieve a number of virtual pointer rings, setting a joined pair of blocks as a new node in an Euler number mapping to a competition on genus and closure of the tree head leaf list to match block use demand.

The coin, as it were, is the genus topology, with weighted construction ownership of node value. The data decides part selection of the tree leaf node loop back pointers. The randomness allows a spread of topological properties in the proof of work space.

VCVRack Build 32 Bit

VCVRack is a virtual modular synth which is open source. The build on Windows is 64 bit only. Challenge accepted.

The dependencies follow on Google Drive. GNU 7.2.0 build. Quite a few libs to -lxxx in the Rack Makefile. Fri 15 Sept 2017 13:00: minor ABI build issue with jansson. Now fixed and Rack.exe builds. It will need some plugins compiling.

dep32.zip

The main reason for 32 bit is a cheap tablet PC, and the idea of using it for music playing. I also need a source build to develop plugins for it. I also took the opportunity to use libzip 1.3.0 for bz2 support. The build process involved MSYS2 setup, and the usual C find-the-dependency, with a twist of forking on GitHub and a touch of submodule redo. Some file renaming to convince the rest of the build about x86 versus x64 was par for the job.

Some modules are planned, but the build to link against and test is essential. It’s seriously cool, and my VST coding may migrate. Very easy to build the plugins with little bampf code, very challenging to use the dep make from source. Try the prebuilt app if you have no C experience. I will make 64 bit versions of anything I make, and perhaps a 32 bit bz2 packed version. Maybe BWT/LZW will get into libzip eventually.

The Rack.exe built. I have yet to build modules so no plugins. The effect is ‘nothing happens’ not even an error. The .dll files load, as removing them makes errors, which is a good sign of loading.

.EXE (32-bit) – No Plugins Alpha Coolish. Now some GUI and imagination … libRack.a

The bad news is ccmalloc fails when starting up. So performance may be limited or none. It does allow compiling against the libs to develop plugins, although a final 64 bit build would be needed for tests. A semi useful on the go distro.

I’ve started on a domain specific language to assist in the manufacturing of plugins. It’s built in the C pre-processor, so the output of errors is somewhat archaic. This is not an issue for myself, and word namespaces are currently sorted by having a set of macros in each file. Next I guess is abstracting the coordinate system. The coordinates are now fixed.

There is engineered space for 2 LEDs, 6 sockets and 4 dials in the first generic template. The design to be done involves moving some .png resources to .svg for the future. It will involve some redrawing from some older resources.

A Modified ElGamal for Passwords Only

It occurred to me that g does not need to be made public for ElGamal signing, if the value g^H(m) is stored as the password hash, generated by the client. Also, (r, s) can be changed to (r, r^s) to reduce server verification load to one mod power and one precision multiply mod p, and a subtraction equality test. So on the creation of a new password (y, p, g^H(m)) is created, and each log in needs the client to generate a k value to make (r, r^s).
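For concreteness, a sketch of the server side check as I read it (hypothetical names; textbook ElGamal verification is g^H(m) = y^r * r^s mod p, so with v = g^H(m) stored, and (r, w) received where w = r^s mod p, one mod power, one multiply and one equality test suffice):

import java.math.BigInteger;

public class PasswordVerify {
    //v is the stored g^H(m); (r, w) is the client's (r, r^s)
    static boolean verify(BigInteger p, BigInteger y, BigInteger v,
                          BigInteger r, BigInteger w) {
        return v.equals(y.modPow(r, p).multiply(w).mod(p));//one modPow, one multiply mod p
    }
}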

Password recovery would be a little complex, and involve some email backdoor based on maybe using x as a pseudo H(m), and verifying the changes via generation of y. This would of course only set the local browser to have a new password. So maybe a unique (y, p, g^H(m)) per browser local store used. Index the local storage via email address, and Bob’s yer been here before.

Also, the server can crypt any pending view using H(m) as a person’s private key, or the private key as a browser specific personal private key, or maybe even browser key with all clients using same local store x value. All using DH shared secrets. This keeps data in a database a bit more private, and sometimes encrypt to self might be useful.

Is s=H(m)(1-r)(k^-1) mod (p-1) an option? As this sets H(m)=x, eliminating another y, making (p, g^H(m)) sufficient for authentication server storage, and g is only needed if the server needs to send crypts. Along with r=g^k mod p, as some easy sign. (r, s) might have to be used, as r^s could be equated as modinverse(r) for an easy g^H(m) equality, and the requirement to calculate s from r^s is a challenge. So a secure version is not quite as server efficient.

In reality k also has to be computed to prevent (r, s) reuse. This requires that the k choice is the server's. Sending k in plaintext defeats the security, so g is needed, to calculate g^z, so that (g^z)^H(m) = (g^H(m))^z = k on both sides. A retry randomizer to hide s=0, and a protocol is possible.

This surpasses a server md5 of the password. If the md5 is client side, a server capture can log in. If the md5 is server side, the transit intercept is … but a server DB compromise also needs a web server compromise. This algorithm also needs a client side compromise, or email intercept as per.

The reuse of (r, s) can’t be prevented without knowing k, and hence H(m), therefore a shared secret as a returned value implies H(m) knowledge. So one mod power client side, and two server side.

g^k to client.
(g^k)^H(m) to server.
(g^H(m))^k = (g^k)^H(m) tests true.
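A minimal runnable sketch of that exchange (toy parameters, not production sizes; p, g and the H(m) stand-in are illustrative, and a real p would be a safe prime with a proper generator):

import java.math.BigInteger;
import java.security.SecureRandom;

public class ChallengeResponse {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(512, rnd);
        BigInteger g = BigInteger.valueOf(2);
        BigInteger hm = new BigInteger(256, rnd);//stands in for H(m)
        BigInteger v = g.modPow(hm, p);//server stores g^H(m)

        BigInteger k = new BigInteger(256, rnd);//server challenge secret
        BigInteger challenge = g.modPow(k, p);//g^k to client
        BigInteger response = challenge.modPow(hm, p);//(g^k)^H(m) back to server
        System.out.println(v.modPow(k, p).equals(response));//(g^H(m))^k tests true
    }
}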

Signatures are useless as challenge responses. The RSA version would have to involve a signature on H(m) and so need H(m) direct. Also, the function H can be quite interesting to study. The application of client side salt also is not needed on the server side as a decode key, and so not decoded there. DH is so cool like that. And (p-1) having a large factor is easy to arrange in the key generation. And write access is harder, most of the time, to obtain for data.

The storing of a crypt with the g^k used locks it for H(m) keyed access. This could void data on a password reset, or a browser local storage reset, but does prevent some client data leak opportunities, such as DB decrypt keys. This would have multiple crypts of the symmetric key for shared data, but would this significantly reduce the shared key security? It would prevent new users accessing the said secured data without cracking the shared key. A locked share for private threads, say?

Spamming your friends with g^salt and g^salt^H(m)?

The first one is a good idea, the second not so much. AI spam encoding g^salt to your and friends' accounts. The critical thing is the friend doesn't get the password. Assume a bad friend, who registers and gets g^salt to activate, from their own chosen spoof password. An email does get sent to your address, to cancel the friend as an option, and no other problem exists excepting login to a primary mail account. As a spoof maybe would see the option to remove you from your own account.

The primary control email account would then need secondary authentication. Such as only see the spam folder, and know what to open first and in order. For password recovery, this would be ok. For initial registration, it would be first come first served anyhow.

Sallen-Key ZDF design

As part of the VST I am producing, I have designed an SK filter analogue where the loading of the first stage by the second is removed to ease implementation. This only affects the filter Q, which then has an easy translation of the poles to compensate. Implementing it as a CR filter simulation reduces the basic calculation. This is then expanded on by a zero delay design, to better its performance.

ZDF filters rely on making a better integral estimate of the voltage over the sample interval to better calculate the linear current charge delta voltage. More of a trapezoid integration than a sum of rectangles. There are still some non-linear charge effects, as the voltage affects the current. The current sample out is now not known, and just needs a collection of terms to solve for it. Given a high enough sample rate, the error of linearity is small. Smaller than without it, and the phase response is flat due to the error being symmetric on the simulated capacitor voltage and drive, and not just the capacitor voltage.

The frequency to the correct resistive constant is a good match, and any further error is equivalent to a high frequency gain reduction. There is a maximum frequency of stability introduced in some filters, but this is not one of those; stability increases with ZDF. The double pole iteration is best done by considering x+dx terms and shifting the dx calculation till later. Almost: the output of pole 1 is used to calculate most of the output of pole 2 multiplied by a factor, added on to the pole 1 result, and the pole 2 result is then finally divided. These dx are then added to make the final outputs to memorize.
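A minimal sketch of the trapezoidal zero delay step in isolation (the textbook one pole TPT form, not the Sallen-Key derivation above; fc, fs and the prewarping are the usual conventions):

public class ZdfOnePole {
    private double z;//simulated capacitor state
    private final double g;//prewarped gain: tan(pi * fc / fs)

    ZdfOnePole(double fc, double fs) {
        g = Math.tan(Math.PI * fc / fs);
    }

    double process(double x) {
        //solve y = z + g * (x - y) for y: the implicit zero delay feedback step
        double v = g * (x - z) / (1.0 + g);
        double y = v + z;
        z = y + v;//trapezoidal state update
        return y;
    }
}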

More VST ideas and RackAFX

Looking into more instrument ideas, with the new Steinberg SDK and RackAFX. This looks good so far, with a graphical design interface and a bit of a curve on getting Visual Studio up to compiling. Design the UI, and then fill in the blanks with audio render functions. Looks like it will cut development time significantly. Not a C beginner tool, but close.

It's likely going to be an all in one 32 bit .dll file with midi triggering the built-in oscillation, and a use as a filter mode too. Hopefully some different connected processing on the left and right. I want the maximum flexibility without going beyond stereo audio, as I am DAW limited. The midi control may even be quite limited, or even not supported in some DAW packages. This is not too bad as the tool is FX oriented, and midi is more VSTi.

Na, scratch that; I think I'll use an envelope follower and PLL to extract note data. So analog, and it simplifies the plugin. Everything without an easy default excepting the DSP will not be used. There is no reason to make any more VSTi, and so just VST FX will be done.

Looks like every time you use Visual Studio it updates a few gig, and does nothing better. But it does work. There is a need for a fast disk, and quite a few GB of main memory. There is also a need to develop structure in the design process.

The GUI is now done, and next up is the top down class layout. I've included enough flexibility for what I want from this FX, and have simplified the original design to reduce the number of controllers. There is now some source to read through, and perhaps some examples. So far so good. The most complex thing so far (assuming you know your way around a C compiler) is the choice of scale on the custom GUI. You can easily get distracted in the RackAFX GUI, and find the custom GUI has a different size or knob scale. It's quite a large UI I'm working on, but with big dials and a lot of space. Forty dials to be exact, and two switches.

I decided on differing processing on each stereo channel, and an interesting panning arrangement. I felt inspired by the eclipse, and so have called it Moon. The excellent WebKnobMan is good at producing dial graphics for custom knobs. The few backgrounds in RackAFX are good enough, and I have not needed GIMP or Photoshop. I haven't needed any fully custom control views, and only one enum label changing on twist.

Verdict is: cheap at the price, not idiot proof, and does need other tools if the built in knobs are not enough. I do wonder if unused resources are stripped from the .dll size. There are quite a few images in there. I did have problems using other fonts, which were selectable but did not display or make an error. Bitmaps would likely be better.

The coding is underway, with the class .h files almost in the bag, and some of the .cpp files for some process basics. A nice 4 pole filter and a waveshaper. Likely I will not bother with sample rate resetting without a reload. It's possible, but if you're changing rate often, you're likely weird. Still debating the use of midi and a vector joy controller. There is likely a user case. Then maybe after this I'll try a main synth using PDE oscillators. VST programming is quite addictive.

I wonder what other nice GUI features there are? There is also the fade bypass I need to do, and this may be joined with the vector joy. And also pitch and mod wheels perhaps. Keeping this as unified control does look a good idea. Project Moon is looking good.

The Cloud Project

So far I'm up to 5 classes left to fill in:

  • SignedPublicKey
  • Server
  • Keys
  • AuditInputStream
  • ScriptOutputStream

They are closely coupled in the package. The main reason for defining a new SignedPublicKey class is that the current CA system doesn't have sufficient flexibility for the project. The situation with tunnel proxies has yet to be decided. At present the reverse proxy tunnel over a firewall is based on overriding DNS at the firewall, to route inwards, and not having the self as the IP for the host address. Proxy rights will of course be certificate based, and client to client link layer specific.

UPDATE: Server has been completed, and now the focus is on SignedPublicKey for the load/save file access restrictions. The signing process also has to be worked out to allow easy use. There is also some consideration for a second layer of encryption over proxy connection links, and some decisions to be made on the server script style.

The next idea would be a client specific protocol. So instead of server addresses, there would be a client based protocol addressing string. kring.co.uk/file is a server domain based address. This perhaps needs extending.

Cryptography





pub 4096R/8E2EAD58 2017-07-30 Simon Jackson

MIT key server

I've been looking into cryptography today and have developed a quartet filter of Java classes which do Diffie-Hellman 2048 AES key transfer with AES encryption, and ElGamal signing. I chose not to use the shared secret method which uses both private keys, but went for a single secret symmetric AES version, with no back communication.

The main issues were with the signature fail stream close handling, to avoid data corruption via pre-verified data being read as active. An interesting challenge it has been. Other ciphers may have been more logical to some people, and doing the DH modPow by explicit coding was good for the code soul.

I think the discrete logarithm problem is quite secure, and has the square order of 1 prime versus the RSA 2 primes for the same key length. The elliptic curve methods are supposedly harder, but the key topology has perhaps some backdoors deep in some later maths. The AES 128 has the lowest key complexity, and is the weakness in the scheme as written, so an interleave was made.

Java does make it difficult to build a standard enhanced symmetric cipher to fix this key shortfall. Not impossible, but difficult. I may add an intermediate permutation filter to expand the symmetric key length. In the end, I decided on a split symmetric 256 bit key for AES: one half for an outer ECB, and one for an inner CBC. The 16 byte IV was used as a step offset between them for an effective 256 bit key.

The DH 2048 does not do key exchange with a common p or g, as this is what leads to the x collision over the same p problem. The original plan of public key with less exchange of ephemeral keys is better. The time solve complexity is similar to that of a comparable RSA key. EC cryptography is cool, but still a little not understood, which is ironic for a mathematical field, with a little too much "under" information on how maybe to "find" holes.

The whole concept of perfect forward security moves the game on to AES cracks based on initial stream content estimates. I'd suggest most of the original key exchange space is pre-computed for a simple 128 bit symmetric crack by now. Out of all the built in Java key types, DH is from my point of view the best for public key cryptography. RSA is cool too for sure, but division is a "relatively" simple operation. There is an estimated 20 bit advantage in the discrete log problem.

DH keys can be decoded to do ElGamal and basic public key secret generation. I'm not sure if DSA as an alternative just needs an extra factor, but a Pollard rho triggering a future co p, q effect might be possible. P and Q in DSA are not independent; one is a multiple of the other, almost …

Welcome to the national insecurity bank robbery. I know, the state via an affiliated plc, stole 1/4 of my income last year by getting me to destroy evidence.

The artificial limits on the key length, and the problems leading from that, are in the JDK source. Also the deletion of keys from memory pages when freed back to the OS may be a problem. Quite a nice programming challenge to do. The Java libs have some strange restrictions on g. View the source.

/* Diffie-Hellman Cipher AES. (C)2017 K Ring Technologies Ltd.
 A DH symmetric secret (1024 bit) for a 2* AES 128 (256 bit) interleave.
 The 16 byte offset interleave of the ECB is used for the IV slot
 of the CBC.
 */
package uk.co.kring.net;

import java.io.DataInputStream;
import java.io.FilterInputStream;
import java.io.FilterOutputStream;
import java.math.BigInteger;
import java.math.SecureBigInteger;
import java.security.KeyPair;
import java.security.PublicKey;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.interfaces.DHPrivateKey;
import javax.crypto.interfaces.DHPublicKey;
import javax.crypto.spec.IvParameterSpec;

/**
 *
 * @author Simon
 */
public final class DHCipher {
    
    public static final class InputStream extends FilterInputStream {

        public InputStream(java.io.InputStream in, KeyPair pub) throws Exception {
            super(in);
            BigInteger p, sk;
            SecureBigInteger x;
            int bytes;
            byte[] bb;
            DHPublicKey k = (DHPublicKey)pub.getPublic();
            p = k.getParams().getP();
            bytes = (p.bitLength() + 7) / 8;
            DHPrivateKey m =(DHPrivateKey)pub.getPrivate();
            x = new SecureBigInteger(m.getX());
            bb = new byte[bytes];
            new DataInputStream(in).readFully(bb);//a bare read() may return short
            sk = new BigInteger(bb);
            if(!sk.abs().equals(sk)) {
                sk = new BigInteger(asLen(bb, bb.length + 1));
            }
            sk = sk.modPow(x, p);
            x.destroy();
            Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
            c.init(Cipher.DECRYPT_MODE, Keys.getAES(sk)[0]);
            in = new CipherInputStream(in, c);
            bb = new byte[24];
            new DataInputStream(in).readFully(bb);//read the full IV block
            IvParameterSpec iv = new IvParameterSpec(Arrays.copyOfRange(bb, 8, 24));
            c = Cipher.getInstance("AES/CBC/PKCS5Padding");
            c.init(Cipher.DECRYPT_MODE, Keys.getAES(sk)[1], iv);
            in = new CipherInputStream(in, c);
            int i = in.read() % 23;
            in.skip(i);
        }
    }
    
    public static byte[] asLen(byte[] b, int len) {
        byte[] q = new byte[len];
        int j;
        for(int i = len - 1; i >= 0; i--) {
            j = i + b.length - q.length;
            if(j < 0) break;
            q[i] = b[j];
        }
        return q;
    }
    
    public static final class OutputStream extends FilterOutputStream {
        
        public OutputStream(java.io.OutputStream out, PublicKey pub) throws Exception {
            super(out);
            BigInteger y, g, p, sk;
            int bytes;
            byte[] bb;
            DHPublicKey k = (DHPublicKey)pub;
            y = k.getY();
            g = k.getParams().getG();
            p = k.getParams().getP();
            bytes = (p.bitLength() + 7) / 8;
            bb = new byte[bytes];
            Keys.getR().nextBytes(bb);
            BigInteger b = new BigInteger(bb);
            b = b.abs();
            bb = g.modPow(b, p).toByteArray();
            bb = asLen(bb, bytes);
            out.write(bb);//ephemeral key
            sk = y.modPow(b, p);
            Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
            c.init(Cipher.ENCRYPT_MODE, Keys.getAES(sk)[0]);
            out = new CipherOutputStream(out, c);
            bb = new byte[24];
            Keys.getR().nextBytes(bb);
            out.write(bb);
            IvParameterSpec iv = new IvParameterSpec(Arrays.copyOfRange(bb, 8, 24));
            c = Cipher.getInstance("AES/CBC/PKCS5Padding");
            c.init(Cipher.ENCRYPT_MODE, Keys.getAES(sk)[1], iv);
            out = new CipherOutputStream(out, c);
            Keys.getR().nextBytes(bb);
            //NB: the original +23*bb[23] cover value cannot survive the mod 256 byte cast
            int pad = (bb[0] & 0xff) % 23;//mask the sign so the pad length is never negative
            out.write(pad);
            out.write(bb, 1, pad);
        }
    }
}

And the following code is for clearing the key. Perfect forward security requires the same q to be used and an extra negotiation step. As it stands it's not perfect, but as good as RSA, maybe slightly better.

/* Useful. (C)2017 K Ring Technologies Ltd.
 */
package java.math;

import java.util.Arrays;
import java.util.Vector;
import javax.security.auth.DestroyFailedException;
import javax.security.auth.Destroyable;
import uk.co.kring.net.Keys;

/**
 *
 * @author Simon
 */
public final class SecureBigInteger extends BigInteger implements Destroyable {
    
    private boolean d = false;
    private BigInteger ref;
    private static final Vector<SecureBigInteger> m = new Vector<SecureBigInteger>();
    
    private synchronized void handler(BigInteger val) {
        ref = val;
        m.add(this);
        System.arraycopy(val.mag, 0, mag, 0, mag.length);
    }
    
    public SecureBigInteger(BigInteger val) throws Exception {
        super(val.bitLength(), 1, Keys.getR());
        handler(val);
    }

    @Override
    public boolean isDestroyed() {
        return d;
    }

    @Override
    public void destroy() throws DestroyFailedException {
        Arrays.fill(mag, -1);
        m.remove(this);
        boolean in = false;
        for(SecureBigInteger x: m) {//Vector is Iterable; an Iterator is not
            if(x.ref == ref) in = true;
        }
        if(!in) Arrays.fill(ref.mag, -1);//clear final instance
        d = true;
    }
    
    public void masterDestroy() throws DestroyFailedException {
        for(SecureBigInteger x: new Vector<SecureBigInteger>(m)) {//copy: destroy() mutates m
            x.destroy();
        }
    }
}

There is also the possibility of g exchange, which would allow for calculation of a new Y. This would have the advantage of instancing a public key set, based on a 1 to 1 crypt role. The cracking of any public key thus only cracks one link, and not the full set of peers to a node. In reality, g just alters y, and p does change the crypt. So an exchange of a new y is required. There is a potential flaw in this swap if the new p and g are chosen in a cracked domain.

Allowing the client to select g in the server selected p domain is a minor concession to duplication; the server would have to return a new y. Such a thing might go DOS attack, and so should be restricted somewhat. If g is high in repeated factors, then the private key is effectively multiplied up and reduced mod p−1, and g is reduced to a lower base.

BLZW Compression Java

Uses Sais.java with dictionary persistence and initialisation corrections. Also with an alignment fix and an unused function removed. A 32 bucket context provides an effective 17 bit dictionary key, using just 12 bits, along with the BWT redundancy model. This should provide superior compression of text. Now includes the faster skip decode. Feel free to donate to grow some open source based on data compression and related codecs.





/* BWT/LZW fast wide dictionary. (C)2016-2017 K Ring Technologies Ltd.
The context is used to make 32 dictionary spaces for 128k symbols max.
This then gives 12 bit tokens for an almost effective 16 bit dictionary.
For an approximate 20% data saving above regular LZW.

The process is optimized for L2 cache sizes.

A mod 16 gives DT and EU collisions on hash.
A mod 32 is ASCII proof, and hence good for text.

The count compaction includes a skip code for efficient storage.
The dictionary persists over the stream for good running compression.
64k blocks are used for fast BWT. Larger blocks would give better
compression, but be slower. The main loss is the count compaction storage.

An arithmetic coder post may be effective but would be slow. Dictionary
acceleration would not necessarily be useful, and problematic after the
stream start. A 12 bit code is easy to pack, keeps the dictionary small
and has the sweet spot of redundancy, while not making large rare or
single use symbols.
*/

package uk.co.kring.net;

import java.io.EOFException;
import java.io.Externalizable;
import java.io.FilterInputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.util.HashMap;

/**
 * Created by user on 06/06/2016.
 */
public class Packer {

    public static class OutputStream extends FilterOutputStream implements Externalizable {

        byte[] buf = new byte[4096 * 16];//64K block max
        int cnt = 0;//pointer to end
        int[] dmax = new int[32];
        HashMap<String, Integer> dict;

        public OutputStream(java.io.OutputStream out) {
            super(out);
        }

        @Override
        public void readExternal(ObjectInput input) throws IOException, ClassNotFoundException {
            out = (java.io.OutputStream)input.readObject();
            input.read(buf);
            cnt = input.readChar();
            dict = (HashMap<String, Integer>)input.readObject();
        }

        @Override
        public void writeExternal(ObjectOutput output) throws IOException {
            output.writeObject(out);
            output.write(buf);
            output.writeChar(cnt);
            output.writeObject(dict);
        }

        @Override
        public void close() throws IOException {
            flush();
            out.close();
        }

        private byte pair = 0;
        private boolean two = false;

        private void outputCount(int num, boolean small, boolean tiny) throws IOException {
            if(tiny) {
                out.write((byte)num);
                return;
            }
            if(small) {
                out.write((byte)num);
                pair = (byte)((pair << 4) + (num >> 8));
                if(two) {
                    two = false;
                    out.write(pair);
                } else {
                    two = true;
                }
                return;
            }
            out.write((byte)(num >> 8));
            out.write((byte)num);
        }

        @Override
        public void flush() throws IOException {
            outputCount(cnt, false, false);//just in case length
            char[] count = new char[256];
            if(dict == null) {
                dict = new HashMap<>();
                for(int i = 0; i < 32; i++) {
                    dmax[i] = 256;//dictionary max
                }
            }
            for(int i = 0; i < cnt; i++) {
                count[buf[i] & 0xff]++;//mask: Java bytes are signed
            }
            char skip = 0;
            boolean first = true;
            char acc = 0;
            char[] start = new char[256];
            for(int j = 0; j < 2; j++) {
                for (int i = 0; i < 256; i++) {
                    if(j == 0) {
                        acc += count[i];
                        start[i] = acc;
                    }
                    if (count[i] == 0) {
                        skip++;
                        if (first) {
                            outputCount(0, false, true);
                            first = false;
                        }
                    } else {
                        if (skip != 0) {
                            outputCount(skip, false, true);
                            skip = 0;
                            first = true;
                        }
                        outputCount(count[i], false, true);
                        count[i] >>= 8;
                    }
                }
                if(skip != 0) outputCount(skip, false, true);//final skip
            }
            int[] ptr = new int[buf.length];
            byte[] bwt = new byte[buf.length];

            outputCount(Sais.bwtransform(buf, bwt, ptr, cnt), false, false);

            //now an lzw
            String sym = "";
            char context = 0;
            char lastContext = 0;
            int test = 0;
            for(int j = 0; j < cnt; j++) {
                while(j >= start[context]) context++;
                if(lastContext == context) {
                    sym += (char)(bwt[j] & 0xff);//add a char ("" + byte would append its decimal digits)
                } else {
                    lastContext = context;
                    outputCount(test, true, false);
                    sym = "" + bwt[j];//new char
                }
                if(sym.length() == 1) {
                    test = (int)sym.charAt(0);
                } else {
                    if(dict.containsKey((char)(context & 0x1f) + sym)) {//bucket by the 5 context bits, as the decoder does
                        test = dict.get((char)(context & 0x1f) + sym);
                    } else {
                        outputCount(test, true, false);
                        if (dmax[context & 0x1f] < 0x1000) {//context limit
                            dict.put((char)(context & 0x1f) + sym, dmax[context & 0x1f]);
                            dmax[context & 0x1f]++;
                        }
                        sym = "" + bwt[j];//new symbol
                    }
                }
            }
            outputCount(test, true, false);//last match
            if(!two) outputCount(0, true, false);//aligned data
            out.flush();
            cnt = 0;//fill next buffer
        }

        @Override
        public void write(int oneByte) throws IOException {
            if(cnt == buf.length) flush();
            buf[cnt++] = (byte)oneByte;
        }
    }

    public static class InputStream extends FilterInputStream implements Externalizable {

        @Override
        public void readExternal(ObjectInput input) throws IOException, ClassNotFoundException {
            in = (java.io.InputStream)input.readObject();
            input.read(buf);
            idx = input.readChar();
            cnt = input.readChar();
            dict = (HashMap<Integer, String>)input.readObject();
        }

        @Override
        public void writeExternal(ObjectOutput output) throws IOException {
            output.writeObject(in);
            output.write(buf);
            output.writeChar(idx);
            output.writeChar(cnt);
            output.writeObject(dict);
        }

        //SEE MIT LICENCE OF Sais.java

        private static void unbwt(byte[] T, byte[] U, int[] LF, int n, int pidx) {
            int[] C = new int[256];
            int i, t;
            //for(i = 0; i < 256; ++i) { C[i] = 0; }//Java
            for(i = 0; i < n; ++i) { LF[i] = C[(int)(T[i] & 0xff)]++; }
            for(i = 0, t = 0; i < 256; ++i) { t += C[i]; C[i] = t - C[i]; }
            for(i = n - 1, t = 0; 0 <= i; --i) {
                t = LF[t] + C[(int)((U[i] = T[t]) & 0xff)];
                t += (t < pidx) ? 1 : 0;
            }
        }

        byte[] buf = new byte[4096 * 16];//64K block max
        int cnt = 0;//pointer to end
        int idx = 0;
        int[] dmax = new int[32];
        HashMap<Integer, String> dict;

        private boolean two = false;
        private int vala = 0;
        private int valb = 0;

        private int reader() throws IOException {
            int i = in.read();
            if(i == -1) throw new EOFException("End Of Stream");
            return i;
        }

        private char inCount(boolean small, boolean tiny) throws IOException {
            if(tiny) return (char)reader();
            if(small) {
                if(!two) {
                    vala = reader();
                    valb = reader();
                    int valc = reader();
                    vala += (valc << 4) & 0xf00;
                    valb += (valc << 8) & 0xf00;
                    two = true;
                } else {
                    vala = valb;
                    two = false;
                }
                return (char)vala;
            }
            int val = reader() << 8;
            val += reader();
            return (char)val;
        }

        public InputStream(java.io.InputStream in) {
            super(in);
        }

        @Override
        public int available() throws IOException {
            return cnt - idx;
        }

        @Override
        public void close() throws IOException {
            in.close();
        }

        private void doReads() throws IOException {
            if(available() == 0) {
                two = false;//align
                if(dict == null) {
                    dict = new HashMap<>();
                    for(int i = 0; i < 32; i++) {
                        dmax[i] = 256;
                    }
                }
                cnt = inCount(false, false);
                char[] count = new char[256];
                char tmp;
                for(int j = 0; j < 2; j++) {
                    for (int i = 0; i < 256; i++) {
                        count[i] += tmp = (char)(inCount(false, true) << (j == 1?8:0));
                        if (tmp == 0) {
                            i += inCount(false, true) - 1;
                        }
                    }
                }
                for(int i = 1; i < 256; i++) {
                    count[i] += count[i - 1];//accumulate
                }
                if(cnt != count[255]) throw new IOException("Bad Input Check (character count)");
                int choose = inCount(false, false);//read index
                if(cnt < choose) throw new IOException("Bad Input Check (selected row)");
                byte[] build;//make this
                //then lzw
                //rosetta code
                int context = 0;
                int lastContext = 0;
                String w = "" + inCount(true, false);
                StringBuilder result = new StringBuilder(w);
                while (result.length() < cnt) {//not yet complete
                    char k = inCount(true, false);
                    String entry;
                    while(result.length() > count[context]) {
                        context++;//do first
                        if (context > 255)
                            throw new IOException("Bad Input Check (character count)");
                    }
                    if(k < 256)
                        entry = "" + k;
                    else if (dict.containsKey(((context & 0x1f) << 16) + k))
                        entry = dict.get(((context & 0x1f) << 16) + k);
                    else if (k == dmax[context & 0x1f])
                        entry = w + w.charAt(0);
                    else
                        throw new IOException("Bad Input Check (token: " + k + ")");
                    result.append(entry);
                    // Add w+entry[0] to the dictionary.
                    if(lastContext == context) {
                        if (dmax[context & 0x1f] < 0x1000) {
                            dict.put(((context & 0x1f) << 16) +
                                    (dmax[context & 0x1f]++),
                                    w + entry.charAt(0));
                        }
                        w = entry;
                    } else {
                        //context change
                        lastContext = context;//track the change, as the encoder does
                        //and following context should be a <256 ...
                        if(result.length() < cnt) {
                            w = "" + inCount(true, false);
                            result.append(w);
                        }
                    }
                }
                build = result.toString().getBytes("ISO-8859-1");//map chars 0..255 back to bytes exactly
                //working buffers
                int[] wrk = new int[buf.length];
                unbwt(build, buf, wrk, cnt, choose);//cnt, not buf.length: the final block may be short
                idx = 0;//ready for reads
                if(!two) inCount(true, false);//aligned data
            }
        }

        @Override
        public int read() throws IOException {
            try {
                doReads();
                int x = buf[idx++] & 0xff;//read() must return 0..255, not a signed byte
                doReads();//to prevent avail = 0 never access
                return x;
            } catch(EOFException e) {
                return -1;
            }
        }

        @Override
        public long skip(long byteCount) throws IOException {
            long i;
            for(i = 0; i < byteCount; i++)
                if(read() == -1) break;
            return i;
        }

        @Override
        public boolean markSupported() {
            return false;
        }

        @Override
        public synchronized void reset() throws IOException {
            throw new IOException("Mark Not Supported");
        }
    }
}

Dissection of the Roots of the Mass Independent Space Equation

The four terms of the equation, and their properties:

(v^2)v''' : 3 constants, square power, 3 root pairs. Energy and force of force; the potential inertial term. Gravity.
−9v v'v'' : 2 constants, linear power, 2 roots. Momentum, force and velocity of force; the kinetic inertial term. Dark.
12(v'^3) : 1 constant, cubic power, 1 root and 1 root pair. Cube of force; the strong term. Strong.
(1−v^2/c^2)v'(wv)^2 : 1 constant, square and quartic power, 1 root pair and 2 root pairs. Force energy; the relativistic force energy coupling. Weak, EM.

The fact that there are 4 connected modes, as it were, implies there are 6 crossovers between modes of action, indicating that one term can be stimulated to convert into another term. The exact equilibrium points can be set as 6 differential equation forms, with some further analysis required of stable phase space bounds, and unstable phases at which to alter the balance to have a particular effect. Installing a constant (or function) of proportionality in each of the following balance equations would in effect allow some translation of one term 'resonance' into another.

v v''' = −9 v' v''                  (3 constants and 1 root point)
(v^2) v''' = 12 (v'^3)              (3 constants and 6 root points)
v''' = (1 − v^2/c^2) v' w^2         (3 constants and 2 root points)
−9 v v'' = 12 (v'^2)                (2 constants and 2 root points)
−9 v'' = (1 − v^2/c^2) w^2 v        (2 constants and 2 root points)
12 (v'^2) = (1 − v^2/c^2) (wv)^2    (1 constant and 12 root points)

Another interesting point is that 3 of the 6 are independent of w (the omega mass oscillation frequency), and so, by implication, of the relativistic dependence on c.

The 3D Flavour Tensor in Analogue to the 4D of Einstein, for a 3D, 4D Curvature in Particle Physics

I like to keep updated about particle physics and LHC things, to quite an advanced level. My interest is in fields and their previous engineering value in radio waves and electronics in general. It makes sense to move to a tensor algebra in the 2+1 charge space, just as was done for the theory of gravitation. In some sense the conservation of acceleration becomes a conservation of net mapped curvature and it becomes funny via Noether’s Theorem.

CP violation as a horizon delta of radius of curvature from the "t" distance is perhaps relevantly phrased as a moment of inertia in the 2+1, and its resultant geometric singular forms. This does create the idea of singular forms in the 2+1 space orbiting (or perhaps more correctly resonating) in tune with singularities in the 3+1 space. This interconnection entanglement, or something similar, is perhaps connected to the "weak phase".

So a 7D total space-time, with differing invariants in the 3D and 4D parts. The interesting thing from my perspective is the prediction of a heavy graviton, and conservation of acceleration. The idea that space itself holds its own shape without graviton interaction, and so conserves acceleration, while the heavy graviton can be a short range force which changes the curvature. The graviton then becomes a mediator of jerk and not acceleration. The graviton, being heavy, would also travel slower than light. Gravity waves would then not necessarily need graviton exchange.

Quantization of theories has I think in many ways gone too far. I think the big breaks of the 21st century will be turning quantized bulk statistics into unquantized statistics, with quantization applied to only some aspects of theories. The implication is that dark matter is bent spacetime, without matter being present to emit gravitons. In this sense I predict it is not particulate.

So 7D and a differential phase space coordinate for each D (except time) gives a 13D reality. The following is an interesting equation I arrived at at one point for velocity solutions to uncertainty. I did not incorporate electromagnetism, but it's interesting in the number of solutions, or superposition of velocity states as it were. The w is assumed constant, but a perturbative expansion in it may be interesting. The units of the equation are conveniently force. A particle observing another particle would also be moving in this way, and the non linear summation for the lab rest frame of explanation might be quite interesting.

(v^2)v''' − 9v v'v'' + 12(v'^3) + (1 − v^2/c^2)v'(wv)^2 = 0

With ' representing differentiation w.r.t. time. So v' is acceleration and v'' is the jerk; I think v''' is called the jounce for those with a mind to learn all the Js. An interesting equation, considering the whole concept of uncertain geometry started from an observation that relative mass was kind of an invariant: mass oscillation, although weird with RMS mass and RMS energy conservation, was perhaps a good way of parameterizing an uncertainty "force" proportional to the kinetic energy momentum product. As an addition it was more commutative as a tensor algebra. Some other work I calculated suggests dark energy is conservation of mass times log of normalized velocity, and dark matter could be conserved acceleration, with gravity and the graviton operating not to bend space on density, but to bend space through a short distance acting heavy graviton. Changes in gravity could thus travel slower than light, and an integral with a partial fourth power fraction could expand into conserved acceleration, energy, momentum and mass information velocity (dark energy), with perhaps another form of Higgs, and an uncertainty boson (spin 1) as well.
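
As a concrete check on the shape of solutions, here is a minimal numerical sketch, not part of the original derivation: the equation rearranged for v''' and integrated with classic RK4. The constants c and w, the step size and the initial state are all illustrative assumptions.

type State = [number, number, number]; // [v, v', v'']

const c = 1.0; // speed of light in chosen units (assumption)
const w = 0.5; // omega, the mass oscillation frequency (assumption)

// v''' = (9 v v' v'' - 12 v'^3 - (1 - v^2/c^2) v' (wv)^2) / v^2, valid away from v = 0
function deriv([v, v1, v2]: State): State {
    const v3 = (9 * v * v1 * v2 - 12 * v1 * v1 * v1
        - (1 - (v * v) / (c * c)) * v1 * (w * v) * (w * v)) / (v * v);
    return [v1, v2, v3];
}

function rk4Step(s: State, h: number): State {
    const add = (a: State, b: State, k: number): State =>
        [a[0] + k * b[0], a[1] + k * b[1], a[2] + k * b[2]];
    const k1 = deriv(s);
    const k2 = deriv(add(s, k1, h / 2));
    const k3 = deriv(add(s, k2, h / 2));
    const k4 = deriv(add(s, k3, h));
    // weighted RK4 combination: s + h(k1 + 2k2 + 2k3 + k4)/6
    return add(add(add(add(s, k1, h / 6), k2, h / 3), k3, h / 3), k4, h / 6);
}

let state: State = [0.1, 0.01, 0]; // arbitrary non zero start
for (let i = 0; i < 1000; i++) state = rk4Step(state, 0.001);
console.log(state);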

So really a 13D geometry. Each velocity state in the above mass independent free space equation is an indication of a particle of differing mass. A particle count based on solutions. 6 quarks and all. An actual explanation for the three flavours of matter? So assuming an approximately linear superposable solution with 3 constants of integration, this gives 6 parameterized solutions from the first term via 3 constants and the square being rooted. The second term involves just 2 of the constants for 2 possible offsets, and the third term involves just one of the constants, but 3 roots with two being in complex conjugation. The final term involves just one of the constants, but an approximation to the fourth power for 4 roots, disappearing when the velocity is the speed of light, and so is likely a rest mass term.

So that would likely be a fermion list. A boson list would be in the boundaries at the discontinuities between those solutions, with the effective mass of the boson controlled by the expected life time between the states, and the state energy mismatch. Also of importance is how the equation translates to 4D, 3D spacetime, and the normalized rotational invariants of EM and other things. Angular momentum is conserved and constant (dimensionless in uncertain geometry).

Assuming the first 3 terms are very small compared to the last term, and v is not the speed of light, there would have to be some imaginary component to velocity, and this imaginary part would be one of the degrees of freedom (leading to a total of 26). Is this imaginary velocity consistent with isospin?

Yang–Mills Existence and Mass Gap (Clay Problem)

If mass oscillation is proved to exist, then the mass gap can never be proved to be greater than zero, as the mass must pass through zero for oscillation. This does exclude the possibility of complex mass oscillation, but that is just mass shrinkage (no eventual gap in the infinite time limit) or mass growth, and hence has no minimum except in the big bang.

The 24 degrees of freedom on the relativistic compacted holographic 3D for the 26D string model imply, with elliptic functions, a 44 fold way. This is a decomposition into 26 sporadic elliptic patterns and 18 generational spectra patterns. The differential equation above provides 6*2*(2+1) combinations from the first three terms, with the 3 constants of integration locating in "colour space" through a different orthogonal basis, and would provide 24 apparent solution types, 12 of them having a complex conjugation relation as a pair, for 36. If this is the isospin solution, then the 12 fermionic solutions have all been found. That leaves the 12 bosonic solutions (the ones without a conjugate in the 3rd term generative), with only 5 (or, if a photon is special, 4) having been found so far. If the bosonic sector includes the dual rooting via the second term for spin polarity, then of the six (with the dual degenerates cancelled), two more are left to be found if light is special in the 4th term.

This would also leave 8 of the 44 way in a non-existent capacity. I'd maybe focus on them being gluons, and consider the third still to be found as a second form of Higgs. OK.

Displacement Currents in Colour Space

Maybe an interesting wave induction effect is possible. I'm not sure what the transmitter should be made of. The ABC modulation may make it a bit "alternate" near the field emission. So not caused by bosons in the regular sense, more the "transition bosons" between particle states. The specific transitions between energy states may (although it's not certain) pull the local ABC field in a resonant or engineered direction. The actual ABC solution of this reality has to have some reasoning for being stable for long enough. This does not imply, though, that no other ABC solutions act in parallel, or are not obtainable via some engineering means.

General Update

An update on the current progress of projects and general things here at KRT. I've set about checking out TypeScript for use in projects. It looks good; it has some hidden pitfalls, on finding .d.ts files for underscore for example, but in general looks good. I'm running it over some JS to get more of a feel. The audio VST project is moving slowly, at oscillators at the moment, with filters being done. I am looking into cache coherence algorithms and strategies to ease hardware design at the moment too. The 68k2 document mentioned in a previous post is expanding with some of these ideas, having a "stall on value match" register, with a "touch since changed" bit in each cache line.

All good.

The Processor Design Document in Progress

TypeScript

Well, I eventually managed to get a file using _.reduce() to compile without errors. I'll test it as soon as I've adapted in QUnit 2.0.1, so I can write my tests to the build as a pop up window, and perhaps back load a file to then be able to save the file from within the editor, and hence become a parser frame.

Representation

An excerpt from the 68k2 document as it’s progressing. An idea on UTF8 easy indexing and expansion.

“Reducing the size of this indexing array can recursively use the same technique, as long as movement between length encodings is not traversed for long sequences. This would require adding in a 2 length (11 bit form) and a 3 length (16 bit form) of common punctuation and spacing. Surrogate pairs just postpone the issue, and move cache occupation to 25%, and not quite that for speed efficiency. This is why simplified Chinese is common circa 2017, and surrogate processing has been abandoned in the Unicode specification, replaced by characters in the surrogate representation space. Hand drawing the surrogates was likely the issue, and character parts (as individual parts) with double strike was considered a better rendering option.

UTF8 therefore has a possible 17 bit rendering due to the extra bit freed by not needing a UTF32 representation. Should this be glyph space, or skip code index space, or a mix? 16 bit purity says skip code space. With common length (2 bit) and count (14 bit), allowing skips of between 16 kB and 48 kB through a document. The 4th combination of length? Perhaps the representation of the common punctuation without character length alterations. For 512 specials in the 2 length form and 65536 specials in the 3 length form. In UTF16 there would be issues of decode and uniqueness. This perhaps is best tackled by some render form meta characters in the original Unicode space. There is no way around it, and with skips maybe UTF8 would be faster.”
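
For reference, the base operation any such skip index builds on is classifying a UTF8 lead byte into its sequence length (the 11 bit form is the 2 byte case, the 16 bit form the 3 byte case). A small sketch:

// Sequence length from a UTF8 lead byte; continuation bytes are an error here.
function utf8SeqLen(lead: number): number {
    if ((lead & 0x80) === 0) return 1;    // 0xxxxxxx: 7 bit ASCII
    if ((lead & 0xE0) === 0xC0) return 2; // 110xxxxx: 11 bit form
    if ((lead & 0xF0) === 0xE0) return 3; // 1110xxxx: 16 bit form
    if ((lead & 0xF8) === 0xF0) return 4; // 11110xxx: 21 bit form
    throw new RangeError('continuation or invalid lead byte');
}

A skip code entry as described would then just record a length class and a count, letting the indexer hop over that many bytes without touching the lead bytes in between.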


// tool.js 1.1.1
// https://kring.co.uk
// (c) 2016-2017 Simon Jackson, K Ring Technologies Ltd
// MIT, like as he said. And underscored :D

import * as _ from 'underscore';

//==============================================================================
// LZW-compress a string
//==============================================================================
// The bounce parameter if true adds extra entries for faster dictionary growth.
// Usually the LZW dictionary grows sub linearly on input chars, and it is of
// note that after a BWT, the phrase contains a good MTF estimate and so it may
// be fine to append each of its chars to many dictionary entries. In this way
// the growth of entries becomes "almost" linear. The dictionary memory footprint
// becomes quadratic. Short to medium inputs become even smaller. Long input
// lengths may become slightly larger on not using dictionary entries integrated
// over input length, but will most likely be slightly smaller.

// DO NOT USE bounce (=false) IF NO BWT BEFORE.
// Under these conditions many unused dictionary entries will be wasted on long
// highly redundant inputs. It is a feature for pre BWT packed PONs.
//===============================================================================
function encodeLZW(data: string, bounce: boolean): string {
    var dict = {};
    data = encodeSUTF(data);
    var out = [];
    var currChar;
    var phrase = data[0];
    var codeL = 0;
    var code = 256;
    for (var i = 1; i < data.length; i++) {
        currChar = data[i];
        if (dict['_' + phrase + currChar] != null) {
            phrase += currChar;
        } else {
            out.push(codeL = phrase.length > 1 ? dict['_' + phrase] : phrase.charCodeAt(0));
            if (code < 65536) {//limit
                dict['_' + phrase + currChar] = code;
                code++;
                if (bounce && codeL != code - 2) {//code -- and one before would be last symbol out
                    _.each(phrase.split(''), function (chr) {
                        if (code < 65536) {
                            while (dict['_' + phrase + chr]) phrase += chr;
                            dict['_' + phrase + chr] = code;
                            code++;
                        }
                    });
                }
            }
            phrase = currChar;
        }
    }
    out.push(phrase.length > 1 ? dict['_' + phrase] : phrase.charCodeAt(0));
    for (var i = 0; i < out.length; i++) {
        out[i] = String.fromCharCode(out[i]);
    }
    return out.join('');//'' needed: the default join inserts commas
}

function encodeSUTF(s: string): string {
    s = encodeUTF(s);
    var out = [];
    var msb: number = 0;
    var two: boolean = false;
    var first: boolean = true;
    _.each(s, function (val) {
        var k = val.charCodeAt(0);
        if (k > 127) {
            if (first == true) {
                first = false;
                two = (k & 32) == 0;
                if (k == msb) return;
                msb = k;
            } else {
                if (two == true) two = false;
                else first = true;
            }
        }
        out.push(String.fromCharCode(k));
    });
    return out.join('');//'' needed: the default join inserts commas
}

function encodeBounce(s: string): string {
    return encodeLZW(s, true);
}

//=================================================
// Decompress an LZW-encoded string
//=================================================
function decodeLZW(s: string, bounce: boolean): string {
    var dict = {};
    var dictI = {};
    var data = (s + '').split('');
    var currChar = data[0];
    var oldPhrase = currChar;
    var out = [currChar];
    var code = 256;
    var phrase;
    for (var i = 1; i < data.length; i++) {
        var currCode = data[i].charCodeAt(0);
        if (currCode < 256) {
            phrase = data[i];
        } else {
            phrase = dict['_' + currCode] ? dict['_' + currCode] : (oldPhrase + currChar);
        }
        out.push(phrase);
        currChar = phrase.charAt(0);
        if (code < 65536) {
            dict['_' + code] = oldPhrase + currChar;
            dictI['_' + oldPhrase + currChar] = code;
            code++;
            if (bounce && !dict['_' + currCode]) {//the special lag
                _.each(oldPhrase.split(''), function (chr) {
                    if (code < 65536) {
                        while (dictI['_' + oldPhrase + chr]) oldPhrase += chr;
                        dict['_' + code] = oldPhrase + chr;
                        dictI['_' + oldPhrase + chr] = code;
                        code++;
                    }
                });
            }
        }
        oldPhrase = phrase;
    }
    return decodeSUTF(out.join(''));
}

function decodeSUTF(s: string): string {
    var out = [];
    var msb: number = 0;
    var make: number = 0;
    var from: number = 0;
    _.each(s, function (val, idx) {
        var k = val.charCodeAt(0);
        if (k > 127) {
            if (idx < from + make) return;
            if ((k & 128) != 0) {
                msb = k;
                make = (k & 64) == 0 ? 2 : 3;
                from = idx + 1;
            } else {
                from = idx;
            }
            out.push(String.fromCharCode(msb));
            for (var i = from; i < from + make; i++) {
                out.push(s[i]);
            }
            return;
        } else {
            out.push(String.fromCharCode(k));
        }
    });
    return decodeUTF(out.join(''));//'' needed: the default join inserts commas
}

function decodeBounce(s: string): string {
    return decodeLZW(s, true);
}

//=================================================
// UTF mangling with ArrayBuffer mappings
//=================================================
declare function escape(s: string): string;
declare function unescape(s: string): string;

function encodeUTF(s: string): string {
    return unescape(encodeURIComponent(s));
}

function decodeUTF(s: string): string {
    return decodeURIComponent(escape(s));
}

function toBuffer(str: string): ArrayBuffer {
    var arr = encodeSUTF(str);
    var buf = new ArrayBuffer(arr.length);
    var bufView = new Uint8Array(buf);
    for (var i = 0, arrLen = arr.length; i < arrLen; i++) {
        bufView[i] = arr[i].charCodeAt(0);
    }
    return buf;
}

function fromBuffer(buf: ArrayBuffer): string {
    var out: string = '';
    var bufView = new Uint8Array(buf);
    for (var i = 0, arrLen = bufView.length; i < arrLen; i++) {
        out += String.fromCharCode(bufView[i]);
    }
    return decodeSUTF(out);
}

//===============================================
//A Burrows Wheeler Transform of strings
//===============================================
function encodeBWT(data: string): any {
    var size = data.length;
    var buff = data + data;
    var idx = _.range(size).sort(function (x, y) {
        for (var i = 0; i < size; i++) {
            var r = buff[x + i].charCodeAt(0) - buff[y + i].charCodeAt(0);
            if (r !== 0) return r;
        }
        return 0;
    });

    var top: number;
    var work = _.reduce(_.range(size), function (memo, k: number) {
        var p = idx[k];
        if (p === 0) top = k;
        memo.push(buff[p + size - 1]);
        return memo;
    }, []).join('');

    return { top: top, data: work };
}

function decodeBWT(top: number, data: string): string { //JSON
    var size = data.length;
    var idx = _.range(size).sort(function (x, y) {
        var c = data[x].charCodeAt(0) - data[y].charCodeAt(0);
        if (c === 0) return x - y;
        return c;
    });

    var p = idx[top];
    return _.reduce(_.range(size), function (memo) {
        memo.push(data[p]);
        p = idx[p];
        return memo;
    }, []).join('');
}
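
// A quick roundtrip of the pair above, for the curious:
// const bwt = encodeBWT('banana');
// console.log(bwt.top, bwt.data);            // the transform and its row index
// console.log(decodeBWT(bwt.top, bwt.data)); // 'banana' again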

//==================================================
// Two functions to do a dictionary effectiveness
// split of what to compress. This has the effect
// of applying an effective dictionary size bigger
// than would otherwise be.
//==================================================
function tally(data: string): number[] {
    return _.reduce(data.split(''), function (memo: number[], charAt: string): number[] {
        var c = charAt.charCodeAt(0);
        memo[c] = (memo[c] || 0) + 1;//increase, starting from a hole (undefined)
        return memo;
    }, []);
}

function splice(data: string): string[] {
    var acc = 0;
    var counts = tally(data);
    return _.reduce(counts, function (memo, count: number, key) {
        if (!count) return memo;//skip char codes not present
        memo.push(String.fromCharCode(Number(key)) + data.substring(acc, count + acc));
        /* adds a seek char:
        This assists in DB seek performance as it's the ordering char for the lzw block */
        acc += count;
        return memo;//reduce must pass the accumulator on
    }, []);
}

//=====================================================
// A packer and unpacker with good efficiency
//=====================================================
// These are the ones to call; the rest are maybe
// useful, but can be considered as foundations for
// these functions. Some block length management is
// built in.
function pack(data: any): any {
    //limits
    var str = JSON.stringify(data);
    var chain = {};
    if (str.length > 524288) {
        chain = pack(str.substring(524288));
        str = str.substring(0, 524288);
    }
    var bwt = encodeBWT(str);
    var mix = splice(bwt.data);

    mix = _.map(mix, encodeBounce);
    return {
        top: bwt.top,
        /* tally: encode_tally(tally), */
        mix: mix,
        chn: chain
    };
}

function unpack(got: any): any {
    var top: number = got.top || 0;
    /* var tally = got.tally; */
    var mix: string[] = got.mix || [];

    mix = _.map(mix, decodeBounce);
    var mixr: string = _.reduce(mix, function (memo: string, lzw: string): string {
        /* var key = lzw.charAt(0);//get seek char */
        memo += lzw.substring(1, lzw.length);//concat, dropping the seek char
        return memo;
    }, '');
    var chain = got.chn;
    var res = decodeBWT(top, mixr);
    if (_.has(chain, 'chn')) {
        res += unpack(chain);//the chain object itself is the packed tail
    }
    return JSON.parse(res);
}
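
A usage sketch for the two entry points, assuming the fixes above (pack chains blocks over 512 kB automatically):

const packed = pack({ msg: 'hello hello hello' });
const restored = unpack(packed);
console.log(restored.msg); // 'hello hello hello'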

 

68k Continued …

A Continuation as it was Getting Long

The main thing in any 64 bit system is multi-processing. Multi-threading has already been covered. The CAS instruction is gone, and cache coherence is a big thing. So a supervisor level mutex? This is an obvious need. The extra long condition code register? How about a set of bits to set, and a stall if not zero? The bits could count down to zero over a number of cycles, leaving an opportunity to spin lock any memory location. Putting it in the status or condition code registers avoids the chip level cache shuffle. A non supervisor version would help user tasks. This avoids the need for atomic operations to a large extent, enough to not need them.

The fact a cache can reset pre-filled with "high memory" garbage, and not need empty bits, saves a little, but does need a little care on the compliance of the boot sequence. A write back to the cache causes a cross core invalidate in most cache designs. There is an argument to set some status bit for ease of implementation. Resetting just the cache line would work, but would remove a small section of memory from the 64 bit address space. A data invalidation queue would be useful to assist in the latency of reset to some synchronous opportunity, the countdown stall assisting in queue size management. As a simultaneous write is a race condition, and fails by simultaneous deletion, the chip level mutexes must be used correctly. For the case where a cache load has to be performed, a double mutex count lock might have to be done. This infers that keeping the CPU ID somewhere, to speed the second mutex lock, might be beneficial.

Check cached (maybe repeat), set global, check cached, check global (maybe repeat), set cached, then proceed. A competing lock would fail, maybe on the set cached, if a time slice occurred just before it. An interrupt delay circuit would be needed for a number of instructions when the global is checked. The common access to the value either sets a stall timer or an interrupt stall timer, or a common timer register with both behaviours. A synchronization window. Of course a badly written code piece could just set the cached, and ruin everything. But write range bounding would prevent this.
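
The sequence above, rendered as a sketch. checkCached/setCached/checkGlobal/setGlobal are hypothetical stand-ins for the cached and global mutex accesses, not a real API:

declare function checkCached(): number;
declare function checkGlobal(): number;
declare function setCached(v: number): void;
declare function setGlobal(v: number): void;

function acquireMutex(id: number): void {
    for (;;) {
        while (checkCached() !== 0) {}      // spin on the locally cached copy
        setGlobal(id);                      // claim the global location
        if (checkCached() !== 0) continue;  // raced: the cached copy got set first
        if (checkGlobal() !== id) continue; // raced on the global, retry
        setCached(id);                      // publish the claim; "do is OK"
        return;
    }
}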

The next issue would be to sort out duplicating a read copy of a cache line into a local cache. This is so likely to be shared memory with the way software should be working. No process shares a cache line otherwise by sensible design of software. A read should get a clone from memory (to not clog a cache transfer bus if the other cache has not written). A cache should check for another cache written dirty, and send a read copy. A write should cause a delete invalidation on the other caches. If locks are correctly written this will preserve all writes. The cache bus only then has to send dirty copies and invalidations. Packet formats are then just an address, an RW bit, and the data width of a cache line (the last part just on the return bus); the RW bit on the return bus is not used.
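
As a sketch, the packet then needs nothing more than this (the field widths are my assumptions):

interface CacheBusPacket {
    addr: bigint;      // cache line address in the 64 bit space
    rw: boolean;       // read or write (dirty copy) request; unused on the return bus
    data?: Uint8Array; // one cache line of payload, present only on the return bus
}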

What happens when a second write happens, and a read only copy is in transit? It is invalid on arrival, but not responsible for any write back. This is L2 cache here. The L1 cache can also be data invalidated, but can stand the read delay. Given the write invalidate strategy, the packet in transit can be turned into an invalidate packet. The minor point is the synchronous assignment to the cache of the read copy at the same bus cycle edge as the write. This just needs a little logic to prevent this by “special address” forwarding. A sort of cancel on execute as it were.

It could be argued that sending over a read only copy is a bad idea and wastes by over connecting the caches. But to not send it would result in a L3 fetch of something not yet written to L3 yet, or the other option would be to stall based on address until it exits the cache on the other processor from under use of the associative address. That could take a very long time. The final issue is closing the mutex. The procedure is the same as opening, but using a different value to set cached. The mutex needs to be flushed? Nope, as the check cached will send a read only copy, and the set cached will invalidate the other dirty.

I think that makes for a minimal logic L2 cache. The L3 cache can be shared, and the T and S caches do not need coherence. Any sensible code would not need this. The D cache needs invalidation only. The I cache should not need anything. When data is written to memory for later use as instructions, there is perhaps an issue, also with self modifying code, which frankly should be ignored as an issue. The L2 cache should get written with code, and a fetch should get a transferred read only copy. There would be no expectation of another write to the same memory location after scheduling execution.

There should be some cache coherence for DMA. There should be no expectation of write to a DMA block before the DMA output transfer is complete. The DMA therefore needs its own L2 cache "simulation" to receive read only updates, and to invalidate when DMA does an input source read. It is only slow off chip IO which necessitates a flush to L3 and main memory. Such things if handled well can allow the write back queue to only have elements entered onto it when hitting the L2 eviction cache. Considering that there is a block of memory which signals cache empty, it makes sense to just pass this write directly out, and latch for immediate continuation of execution, and stall only if the external bus cycle is not complete on a second write to those addresses. Input reads on those addresses have to stall by default if a simplistic ideology is taken.

A more complex method is to indicate a pre-fetch, in a similar way to the 1 item buffer. I hope your IO does not read trigger events (unlikely, but write triggering is not unheard of). A delay 1 item buffer does help with a bit of foreknowledge, and the end of bus cycle latching into this delay slot can be used to continue processing and routine setup. Address latches internally help with the clock domain crossing. The only disadvantage to this is that the processor decides the memory mapped device layout. It would be of benefit to shuffle this slow bus over a serial protocol. This makes an external PLA, micro-controller or FPGA suitable for running the slow bus, and keeps PIN count on the main CPU lower, allowing main memory to be placed and routed closer to the core CPU "silicon chip" die.

L4 Cache

The concept of cloud as galactic cache is perhaps a thing that some are new to. There is the sequential stride static column idea, which is good for some processing, in effect gives the I cache the highest performance, and shows the sequential stride at its best. For tasks that flood the D cache, which is most when heavy optimization is used, the question becomes "is there a need for associative L4 cache off chip?" for effective use of some MB of static single cycle RAM. With a bus size of 32 bits data and a 64 bit addressing system, the tag would exceed the data in consuming the SRAM. If the memory bus is just 32 bit addressing, with some DMA SD card trickery for the high word, this still makes the tag large, but less than 50% of the SRAM usage. Burst mode in this sense is auto increment on the low addresses within the SRAM chip to stop copper trace charge power wastage, plus a tag check wait state and DRAM access generator. The fact that DRAM is accessed in blocks makes the tag shrink further.
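
Making the tag arithmetic concrete, a direct mapped sketch (4 MB of SRAM and 32 bit entries are assumptions of mine, not a spec):

function tagBitsPerEntry(addrBits: number, sramBytes: number, entryBytes: number): number {
    const indexBits = Math.log2(sramBytes / entryBytes); // selects the entry in the SRAM
    const offsetBits = Math.log2(entryBytes);            // byte within the entry
    return addrBits - indexBits - offsetBits;
}
console.log(tagBitsPerEntry(64, 4 << 20, 4)); // 42 tag bits per 32 data bits: tag exceeds data
console.log(tagBitsPerEntry(32, 4 << 20, 4)); // 10 tag bits per 32 data bits: under 50% of SRAM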

Yes it's true, DRAM should have "some" associative SRAM as well as static column banks. But the net effect on performance is minimal. It's more of an L3 eviction cache extender. Having "read only" or even "no action" system files on the RAM disk for all the memory makes file buffers actually be files. Most people would not appreciate this level of detail, but it does allow for easy contraction and expansion of a statically sized RAM disk. D cache thrashing is the problem to solve. Make it bigger. The SDRAM issue can be solved by putting a fair amount of it there, and placing a cloud in higher (or different address space) memory. The SD card interface is then the location of the network interface. Each part of memory is then divided into 3 parts at this level: a direct part, an associative part assisting another cache level, and a tag part, with the tag part and the associative assist forming an interleave partition. Browser caches are a system level feature, not an issue for application developers. The flush cache is "new disk, new net" and beyond.

How to present a file browser picture of the web? FTP sort of did it for files. A bookmark and search view seems like a good way of starting out. There should maybe be an unknown folder, with some entropically selected default folders based on wordage, and the web becomes seen in scope. At the level of a site index, the "tool type" should render a page view of the "folder". The source view may also be relevant for some. People have seen this before, or close to it, and made some interesting research tools. I suppose it does not add up to profit per se, but it has much more context in a robot living allowance world.

This is the end of part II, and maybe more …

 

  • Addressing (An, Dn<<{size16*SHS}.{DS<<size}, d8/24) so that the 2 bit DS field indexes one of 4 .W or larger fields, truncated to .B, .W, .L or .Q, with apparently even more options to spare. If SHS == 3 for example the DS > 1 have no extra effect. I'll think on this (18th Feb 2017).
  • If SHS == 2 then DS == 3 has no effect. If SHS == 1 then DS == 3 does have an effect.
  • This can provide for 3 extra addressing modes not yet developed.

68k2-PC#d12 It's got better! A fab addressing mode. More than 12 bits of embeddable opcode space remains in a 32 bit wide opcode extension, and almost all the 16 bit opcodes are used (all the co-processor F line slots). With not 3, but 10 extra addressing modes.

LZW (Perhaps with Dictionary Acceleration) Dictionaries in O(m) Memory

Referring to a previous hybrid BWT/LZW compression method I have devised, the dictionary of the LZW can be stored in chain linked fixed size structure arrays, with one character (the symbol end) back linking to the first character through a chain. This makes efficient symbol indexing based on number, and with the slight addition of two extra pointers, a set of B-trees can be built, separated by symbol length, to also be loaded inside parallel arrays for fast incremental finding of the existence of a symbol. A 16 bucket move to front hash table could also be used instead of a B-tree, depending on the trade off between the memory of a 2 pointer B-tree and a 1 pointer MTF collision hash chain.
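
A minimal sketch of that chained layout (the names and the typed array choice are mine): parallel fixed size arrays, one back pointer and one end character per symbol, giving O(m) memory for m symbols.

const MAX_SYMS = 65536;                     // 16 bit symbol space
const prevSym = new Int32Array(MAX_SYMS);   // back link: the symbol minus its end character
const lastChar = new Uint16Array(MAX_SYMS); // the appended character (the symbol end)
for (let c = 0; c < 256; c++) { prevSym[c] = -1; lastChar[c] = c; } // literal roots
let nextCode = 256;

function addSymbol(prefix: number, ch: number): number { // extend a prefix by one char
    prevSym[nextCode] = prefix;
    lastChar[nextCode] = ch;
    return nextCode++;
}

function symbolText(code: number): string { // chase the chain back to the first character
    let s = '';
    for (let c = code; c >= 0; c = prevSym[c]) s = String.fromCharCode(lastChar[c]) + s;
    return s;
}

The two extra B-tree pointers (or the MTF hash chain) would then sit in further parallel arrays alongside prevSym and lastChar.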

On the nature of the BWT size, and the efficiency. Using the same LZW dictionary across multiple BWT blocks with the same suffix start character is effective, with a minor edge effect rapidly reducing in percentage as the block size increases. An interleave reordering, such that the suffix start character is the primary group by of linearity, assists in the scan for searchability. As a search can be rephrased as a join on various character pairings, the minimal character pair can be scanned up first, "joined" to the end of the searched for string, and then joined to the beginning in a reverse search, to then pull all the matches sequentially.

Finding the suffixes in the LZW structure makes it relatively easy to produce symbol codes; finding the associated set of prefixes and infixes is a little more complex. A mostly constant search string can be effectively compiled and searched. A suitable secondary index extension, mapping symbol sequences to "atomic" character sequences, can be constructed to assist in the transform of characters to symbol dictionary index code tuples. This is a second level table in effect, which can also be compressed for atom specific search optimization, without loading the LZW dictionary for a find.

The fact the BWT infers an all matches sequential nature means a second level of BWT, with the dictionary index codes as the alphabet, could definitely reduce the needed scan time for finding each LZW symbol index sequence. Perhaps a unified B-tree, as well as the length specific B-trees within the LZW dictionary, would be useful for greater than and less than constraints.

As the index can become a self index, there may be a need to represent a row number alongside the entry. Multi column indexes, or primary index keys, would then best be represented as pointer tuples, with some minor speed size data duplication in context.

An extends chain pointer and a first of extends pointer are not required, as the next length B-tree will part index all extenders. A root pointer to the extenders and a secondary B-tree on each entry would speed finding all suffix or contained in possibilities. Of course it would be best to place these 3 extra pointers in a parallel structure, so as not to be a data interleaved array of structs, but a struct of arrays, when dynamic compilation of atomics is required.

The find performance will be slower than an uncompressed B-tree, but the compression is useful to save storage space. The fact that the memory is used more effectively when compression is used can sometimes lead to improved find performance for short matches with a high volume of matches. An inverted index can use the position index of the LZW symbol containing the preceding match to reduce the size of the pointers, and the BWT locality effect can reduce the number of pointers. This is more standard, and combined with the above techniques for sub phrases or super phrases should give excellent find performance. For full record recovery, the found LZW symbols only provide decoding in context, and the full BWT block has to be decoded. A special reserved LZW symbol could precede a back pointer to the beginning of the BWT block, and work as a header of the post placed char count table and BWT order count.

So finding a particular LZW symbol in a block can be iterated over, but the difficulty in speed is when the "and" condition comes in on the same inverse index. Can the squared time performance be reduced? Reducing the number and size of the pointers helps in some ways, but it does not reduce the essential scan and match nature of the time squared process. Ordering the matching so that the "find" with the least count goes first makes the iteration smaller on average, as it will be the least found, and hence the least joined. Limiting the join set to LZW symbols seems like it will bloom many invalid matches to be filtered, and in essence, simplistically, it does. But the lowering of the domain size allows application of some more techniques.

The first fact is that the LZW symbols are in a BWT block subgroup based on the following characters. Not that helpful, but it does allow a fast filter, and fewer pointers before a full inverse BWT has to be done. The second fact is that the letter pair frequency effectively replaces the count as the join order priority of the "and". It is further based on the BWT block subgroup size and the LZW symbol character counts, for calculation of a pre match density of a symbol; this can be effectively estimated via statistics, and does not need a fetch of the actual subgroup size. In collecting multiple "find" items, correlations can also be made on the information content of each, and a correlated but rarer "find" may be possible to substitute, or add in. Any common or uncorrelated "find" items should be ignored. Order by does tend to ruin some optimizations.

A "find" item combination cache should be maintained, based on frequency of use and execution time to rebuild the result, both used in the eviction strategy. This, in a real sense, is a truncated "and" index. Replacing order by with some other method, such as an order float, such that guaranteed order is not preserved but some semblance of polarity is kept, may also be very useful to reduce sort time, and prevent excessive activity (and hence time spent) when limit clauses are used. The float itself should perhaps be record linked, with an MTF kind of thing in the inverse index.