Differential Modulation So Far

Consider the mapping x(t+1) = k.x(t).(1-x(t)), made famous in chaos mathematics. Given a suitable set of values of k, one for each of the symbols to be represented on the stream, and each preferably chosen so that it produces a chaotic sequence, the sequence can be stretched and offset to span the transmission range of the signal swing.

Knowing that the initial state is represented with exact precision, and that all calculations are performed using deterministic arithmetic with rounding, it becomes obvious that for a given transmit precision it is possible to recover some pre-reception transmission by inferring the preceding chaotic sequence.

The calculation involved for maximum likelihood would be involved and extensive before obtaining a “lock”, but after lock the calculation overhead would go down, and just assist in a form of error correction. In terms of noise immunity this would be a reasonable modulation, as the estimate of the past would become more accurate with reception time and growing knowledge of the sequence, its meaning and its scope of sense in decode. A minimal sketch follows.
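
As a minimal transmitter-side sketch in Java (the two k values, the quantization step and the class name are illustrative assumptions, not a tested modem):

public class ChaoticModulator {
    static final double[] K = {3.7, 3.9};  // assumed chaotic rate per symbol
    static final int PRECISION = 1 << 16;  // assumed transmit precision
    private double x = 0.5;                // exactly known initial state

    // Advance the map with the symbol's k, rounding deterministically to
    // the precision that actually goes on the wire.
    public double transmit(int symbol) {
        x = K[symbol] * x * (1.0 - x);
        x = Math.rint(x * PRECISION) / PRECISION;
        return x; // stretch and offset this to the line's signal swing
    }
}

A receiver running the same deterministic arithmetic from the shared initial state can replay candidate k sequences against the quantized samples it receives, which is the “lock” search described above.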

Time Series Prediction

Given any time series of historical data, predicting the future values of the sequence is a computational task whose complexity grows with the dimensionality of the data. For simple scalar data, a predictive model based on differentials and expected continuation is perhaps the easiest. The order to which the series can be analysed depends quite a lot on numerical precision.

The computational complexity can be limited by using the local past to bound the size of the finite difference triangle, with the highest-order difference assumed to be zero, or given a Monte Carlo Gaussian spread. Other predictions based on convolution and correlation could also be considered.

When using a local difference triangle, the outgoing sample that makes way for the new sample in the sliding window can be used for a simple estimate of the error introduced by “forgetting” that information. In theory this could be used to control the window size, or the Monte Carlo variance. It is a measure related to a Markov model of a memory process: integrating high differentials multiple times gives more predictive deviation from what will actually happen. A sketch of the triangle predictor follows.
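
A minimal sketch of such a windowed difference predictor in Java (the window size and the zero highest-order assumption are illustrative choices):

import java.util.ArrayDeque;
import java.util.Deque;

// Sliding-window extrapolation from a finite difference triangle, assuming
// the highest-order difference is zero (no Monte Carlo spread modelled).
public class DifferencePredictor {
    private final int order;  // number of difference rows kept
    private final Deque<Double> window = new ArrayDeque<>();

    public DifferencePredictor(int order) { this.order = order; }

    // Push a sample; returns the sample just "forgotten" (or null), whose
    // difference contribution estimates the error of dropping it.
    public Double push(double sample) {
        window.addLast(sample);
        return window.size() > order + 1 ? window.removeFirst() : null;
    }

    // Sum the rightmost entry of each backward difference row; with the
    // order-n difference assumed zero, this is the next sample estimate.
    public double predictNext() {
        double[] d = window.stream().mapToDouble(Double::doubleValue).toArray();
        double next = 0;
        for (int len = d.length; len > 0; len--) {
            next += d[len - 1];
            for (int i = 0; i < len - 1; i++) d[i] = d[i + 1] - d[i];
        }
        return next;
    }
}

For the window 0, 1, 4 (a quadratic), predictNext() returns 4 + 3 + 2 = 9, the exact continuation.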

This is obvious when seen in this light. The time sequence has within it an origin in differential equations, although of extreme complexity. This is why spectral convolution correlation works well; it is expensive to compute, but it works well. Other methods have a lower compute requirement, which is why I have been focusing on them these past few days.

A modified Gaussian density approach might be promising: assume an amplitude categorization about a mean, so that the signal density (of the time series, in a DSP sense) can approximate “expected” statistics when the Gaussian is mapped onto the historical amplitude density, given that the motions (differentials) have various rates of motion themselves and so express a density of their own.

The most probable direction holds until something more probable changes the likely direction or rates again. Ideas form from noticing things. Integration, for example, naively accumulates the residual error of how floating point numbers are stored, and higher multiple integrals magnify this effect greatly. It would be better to construct an integral from the local data stream of a time series, and work out the required constant by adding a known integral at a fixed point.

Sacrificing integral precision for the non-accumulation of residual error power is a desirable trade-off in many time series problems. The inspiration for the integral estimator came from this understanding. The next step in DSP from my creative perspective is a Gaussian compander to normalize high-passed (or regression-subtracted, normalized) data to match a variance- and mean-stabilized Gaussian amplitude.

Integration as a continued sum of Gaussians would, via the central limit theorem, tend toward a narrower variance, but the offset error and same-sign square error (smaller in double integrals, but with no average cancellation) lead to things like energy amplification in numerical simulation of energy-conserving systems.

Today’s signal processing piece was sparseLaplace, for quickly finding, for some sigma and time, the integral going toward infinity. I wonder how the series of the integrals goes as a summation of increasing sections of the same time step, and how this can be accelerated as a series approximation to the Laplace integral.

The main issue is that it is calculated from the localized data, good and bad. The accuracy depends on the estimates of differentials and so on the number of localized terms. It is a higher-dimensional “filter”, as it has an extra set of variables for the centre and length of the window of samples as well as for sigma. A few steps of time should be all that is required to get a series summation estimate. Even the error in the time-step approximation to the integral has a pattern, and may be used to make the estimate more accurate. A sketch of the sectioned summation follows.
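
A sketch of that sectioned summation in Java, assuming uniformly sampled data, a trapezoid rule per section, and Aitken delta-squared as the accelerator (all replaceable choices):

// Laplace integral of sampled data as a sum of equal time-step sections,
// with Aitken delta-squared applied to the partial sums.
public class SectionedLaplace {
    // trapezoid estimate of one section of f(t)*exp(-sigma*t)
    static double section(double[] f, int n, int step, double dt, double sigma) {
        double acc = 0;
        for (int i = 0; i < step; i++) {
            double t0 = (n * step + i) * dt;
            acc += 0.5 * dt * (f[n * step + i] * Math.exp(-sigma * t0)
                             + f[n * step + i + 1] * Math.exp(-sigma * (t0 + dt)));
        }
        return acc;
    }

    // Aitken delta-squared on the last three partial sums
    static double aitken(double s0, double s1, double s2) {
        double d = s2 - 2 * s1 + s0;
        return d == 0 ? s2 : s2 - (s2 - s1) * (s2 - s1) / d;
    }

    public static double estimate(double[] f, int step, double dt, double sigma) {
        int sections = (f.length - 1) / step;
        double s0 = 0, s1 = 0, s2 = 0, sum = 0;
        for (int n = 0; n < sections; n++) {
            sum += section(f, n, step, dt, sigma);
            s0 = s1; s1 = s2; s2 = sum;
        }
        return sections >= 3 ? aitken(s0, s1, s2) : sum;
    }
}

The acceleration pays off when the section sums decay roughly geometrically, which the exp(−sigma·t) factor between sections encourages.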

AI and HashMap Turing Machines

Considering that a remarkable abstract datatype or two is possible, and perhaps closely models the human sequential thought process, I wonder today what applications this will have once a suitable execution model, ISA and microarchitecture have been defined. The properties of controllable locality of storage and motion, along with read and write, branch on stimulus, and other yet-to-be-discovered machine operations, make for a container for a kind of universal Turing machine. A minimal sketch follows.
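
A toy of the idea in Java (16+, for the record), with a HashMap as the sparse tape; the rule encoding, the '_' blank and the halt state -1 are illustrative choices:

import java.util.HashMap;
import java.util.Map;

// A Turing machine over a HashMap tape: read/write, controlled motion of
// the head, and branch on stimulus via rule lookup.
public class HashTape {
    record Action(char write, int move, int next) {}
    private final Map<Long, Character> tape = new HashMap<>();
    private final Map<String, Action> rules = new HashMap<>();
    private long head = 0;
    private int state = 0;

    void rule(int st, char read, char write, int move, int next) {
        rules.put(st + ":" + read, new Action(write, move, next));
    }

    void run() {
        while (state != -1) {
            char read = tape.getOrDefault(head, '_'); // read with locality
            Action a = rules.get(state + ":" + read); // branch on stimulus
            if (a == null) break;                     // no rule: halt
            tape.put(head, a.write());                // write
            head += a.move();                         // controlled motion
            state = a.next();                         // state change
        }
    }

    public static void main(String[] args) {
        HashTape t = new HashTape();
        t.rule(0, '_', '1', 1, 1);  // write three 1s then halt
        t.rule(1, '_', '1', 1, 2);
        t.rule(2, '_', '1', 1, -1);
        t.run();
        System.out.println(t.tape); // {0=1, 1=1, 2=1}
    }
}

Locality of storage and motion then reduces to key arithmetic on the tape map.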

Today is a good day for robot consciousness, although I wonder just how applicable the implementation model is for biological life all the universe over. Here’s a free paper on a condensed few months of abstract thought.

Computative Psychoanalysis

It’s not just about IT, but about thrashing through what the mind does, can be made to do, and did; it all leverages information and the growth of modelling simulation for matched or greater ability.

Yes, it could all be made in neural nets, but given the tools available why would you choose to stick with the complexity and lack of density of such a solution? A reasoning accelerator would be cool for my PC. How is this going to come about without much worktop workshop? If it were just the oil market I could affect, and how did it come to pass that I was introduced to the fall of oil, and for what other consequential thought sets and hence productions I could change.

One might call it wonder and design dressed in “accidental” reckless endangerment. For what should be a simple, obvious benefit to the world becomes embroiled in competition, in the drive for profit, in the control of the “others”, the making of a non-happening which upsets vested interests.

Who’d have thought it from this little cul-de-sac of a planetary system. Not exactly galactic mainline. And the winner is not halting for a live mind.

UAE4ALL2 on Android with Amiga Forever

It works better than uae4arm when you have not much memory free internally, as both the system and work drives can be on the SD card. It does involve making an extra System.hdf in a desktop tool and performing a copy <from> to <to> all clone after formatting the system disk as something named other than the source (e.g. Workbench), so the copy works.

The directory for the Work drive can be copied off the Amiga Forever CD (which you own), and placed in the folder <StorageDevice>/Android/data/atua.anddev.uae4all2/files along with the System.hdf, as the app only allows one of each, and boots from one. It also seems not to allow some combinations; a bare file system on the Work side is better than the other way round.

If you get the ROMs from the CD too, and place them in there, you get a purple boot screen; for some reason it needs an app emulation restart to use the disks in my configuration. The mouse is horrible, and so a little USB mini keyboard and trackpad combo is essential. You kind of have to have a bit of font imagination until you set the screen mode (which also needs a shutdown and restart).

A New Paper on Computation and Application

https://www.amazon.co.uk/Pipeline-Cache-Big-RISC-Computational-ebook/dp/B07XY9RSHH/ref=sr_1_1?keywords=pipeline+cache+big+risc&qid=1568807888&sr=8-1 is a nice paper on some computation issues, and it eventually covers some politics and vitamin biochemistry. Not a fan? Still letting your biome make you shout at the bad people not feeding your hunger?

Shovel in the gammon all you want, and load it up with chips, as a little survivor from ancient times takes advantage of the modern high-carb diet and digs a hole for you.

Calculus

I don’t always get it wrong.

So it becomes a determined process to integrate. And as the two forms of integration closure are known, the process can be extended, as any integration has a closed form if the series converges. Integration by parts leads to a series. So why? The end points can have good integral estimates, and many in-between values of the function do not need evaluation; series acceleration should be enough. Imagine an integral from zero to m^a·n^b which equals m·n. If for some a not equal to b the factor of m or n becomes obvious? The calculation would be log of the upper limit, in polytime, not linear.
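
One such closure comes from repeated integration by parts against powers of x; stated as an illustration, valid where the series converges:

\[ \int f(x)\,dx \;=\; \sum_{n=0}^{\infty} (-1)^n\, f^{(n)}(x)\, \frac{x^{n+1}}{(n+1)!} \;+\; C \]

Only the derivatives of f at an end point are needed, so the in-between evaluations really can be skipped and the summation accelerated.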

The previous page was:

Think about f+c as the integral of f plus a rectangle, making f always positive when offset by c, to give a defined sign and hence a binary search opportunity.

It wasn’t specifically developed to crack public key things; the motivation was simplified solutions to differential equations. Anyone who’s done DE solving knows the problem with them. That problem is integration, and closing it into something algorithmic is a useful thing. That kind of leaves the Lambert W sort of collection-of-variables problem for real analytical DEs. Good.

It also sets a complexity limit on integration in terms of an analytic function and a series of differential orders. The “try a power series multiplied by ln x” is seen as good advice, but lacking. Hypergeometric series can be re-seen as useful for approaching the series of this closure. It may be helpful to decompose these closures into more fundamental sums of new special operators, and do some cancellation. If you find yourself pedantic about dx or plus C, then might I suggest you forget it and blunder on.

N-IDE Java on Android Fire 7

It looks so simple and efficient. I think git is missing, but a simple Total Commander copy into a backed-up directory should be fine for now. It has the basics of Java SE and can even build Android GUI apps. I think I’ll keep things console for now and put together some tools to do the things I would like to do.

Seems to run a static main just fine. I wonder how it does with ARM system libraries and JNI native calls. I don’t think I’ll use much of that, but it might get useful at some point. The code interface is OK; it’s quite lightweight and so does not fill the storage too much. Quite good for a simple editor with code completion and a simple class creation tool. Should do the job.

I think the most irritation will be the need to insert the method names before doing the top-down coding. Kind of obvious, as you can’t autocomplete an identifier without it being typed in the class anyway. But that’s OK, as I’d be defining an expected class “interface” anyhow, and I’m not prone to worrying too much about as-yet unimplemented methods.

Amiga on Fire on Playstore

The latest thing to try: a Cloanto Amiga Forever OS 3.1 install to SD card in the Amazon Fire 7. Is it the way to get a low-power portable development system? Put an OS on an SD and save main memory? An efficient OS from the times of sub-20 MHz CPUs and 50 MB hard drives.

Is it relevant in the PC age? Yes. All the source code in Pascal or C can be shuffled to the PC, and I might even develop some binary prototype apps. Maybe a simple web engine is a good thing to develop, what with the low CSS bull, and AROS open development for the x86 architecture becoming better at making for a good VM sandbox experience, with main browsing on a sub-flavour of bloat OS 2020. A browser, a router and an Amiga.

Uae4arm is the emulation app available from the Play Store. I’m looking forward to some Aminet greatness. Some mildly irritated coding in Free Pascal with objects these days, and a full GCC build chain. Even a licensed set of games will shrink the Android entertainment bloat. A bargain rush for the technical. Don’t worry, you ST users, it’s a chance to dream.

Lazarus lives. Or at least Borglaz the great is as it was. Don’t expect to be developing realtime video code or supercomputer forecasts. I hear there is even a Python. I wonder if there are some other nice things. GCC and a little GUI redo? It’s not about making replacements for Android apps, more a less-bloated but full-function OS with enough test and utility grunt to make things. I wonder how pas2js is. There is also AMOS 2.0 to turn AMOS source into nice web apps. It’s not as silly as it seems.

Retro minimalism is more power in the hands of code designers. A bit of flange and boilerplate later and it’s a consumer product option with some character.

So it needs about a 100 MB hard disk file, located not on the SD card as it needs write access, and after some changes of disk a boot of a clean install is done. Add the Downloads folder as a disk and alter the mouse speed for the plugged-in OTG keyboard. Excellent. I’ve got more space and speed than I did in the early 90s, and 128 MB of Zorro RAM. Still an AGA A1200, but with a 68040 on its fastest setting.

I’ve a plan to install Free Pascal and GCC along with some other tools to take the ultra-portable Amiga on the move. The night light on the little keyboard will be good for midnight use. Having a media player in the background will be fun, and browser downloads should be easy to load.

I’ve installed Total Commander on the Android side to help with moving files about. The installed BSD socket library would allow running an old Mosaic browser, or AWeb, but neither is really suited to any dynamic content. They would be fast though. In practice Chrome and a download mount is more realistic. It’s time to go Aminet fishing.

It turns out that it is possible to put hard files on the SD card, but they must be placed in the Android app data directory and made by the app for correct permissions. So a 512 MB disk was made for better use of larger development versions. This is good for the Pascal 3.1.1 version.

Onwards to install a good editor such as Black’s Editor, and of course LHA, and some other goodies such as NewIcons. I’ll delete the LCL alpha units from Pascal as these will not be used by me. I might even get into ARexx or some of the wonderful things on those CD images from Meeting Pearls or a cover disk archive.

Update: for some reason the SD card hard disk image becomes read-locked. The insistent gremlins of the demands of time-value money. So it’s 100 MB and a few libraries short of C. Meanwhile Java N-IDE is churning out class files, and PipedInputStream has the buffer to stop PipedOutputStream waffling on and filling up memory; a small sketch of that follows. Hecl the language is to be hooked into the CLI I’m throwing together. Then some data time streams and some algorithms. I think the interesting bit today was the idea of stream variables. No strings; a minimum would be a stream.
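
A small sketch of that bounded pipe in Java (9+, for try-with-resources on an existing variable; the 64 KB size and thread layout are illustrative choices):

import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

// The reader-side buffer caps memory, so a fast producer blocks on write
// instead of filling the heap.
public class BoundedPipe {
    public static void main(String[] args) throws IOException {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out, 64 * 1024);
        new Thread(() -> {
            try (out) {
                for (int i = 0; i < 1_000_000; i++) out.write(i & 0xff); // blocks when the pipe is full
            } catch (IOException ignored) {}
        }).start();
        long total = 0;
        while (in.read() != -1) total++;  // the consumer paces the producer
        System.out.println("read " + total + " bytes");
    }
}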

So after building a CLI and adding in some nice commands, maybe even JOGL as the Android graphics? You know about the 32- and 64-bit restrictions (both) on the Play Store though. I wonder if both are pre-built, as much of the regular Android development cycle is filled with crap. Flutter looks good, but for mobile CLI tools with some style of 80’s bitmap, it’s just a little too formulaic.

Ideas in AI

It’s been a few weeks, and I’ve been writing a document on AI and AGI which is currently internal and selectively distributed. There is definitely a lot to try out, including new network arrangements or layer types, and a fundamental insight of the Category Space Theorem and how it relates to training sets for categorization or classification AIs.

Basically, the category space is normally created to have only one network loss function option to minimise on backpropagation. It can be engineered so this is not true, and training data then does not compete so much in a zero-sum game between categories. There is also some information context for an optimal ordering in categorization when using non-exact storage structures.

Book Published in Electronic Format. Advanced Content, not Beginner Level. Second Edition may Need a Glossary.

The book is now live at £3 on Amazon in Kindle format.

It’s a small book, with some bad typesetting, but getting the information out is more important for a first edition. Feedback and sales are the best way for me to decide whether and what to put in a second edition. It may be low on mathematical equations, but it does need an in-depth understanding of neural networks, and some computer science.

AI as a Service

The product development starts soon, from the initial work done over the last few weeks: an AI which has the aim of being more performant per unit cost. This is to be done by adding in “special functional units” optimized for effects that are better done by these than by a pure neural network.

So apart from mildly funny AaaS selling jokes, this is a serious project initiative. The initial tests, when available, will compare the resources used to achieve a level of functional equivalence. In this regard I am not expecting superlative leaps forward, although those would be nice, but gains in the general trend to AI for specific tasks to start.

By extending the already available sources (quite a few) with flexible licences comes the building of easy-to-use AI, with some modifications and perhaps extensions to open standards such as ONNX, and then on to maybe VHDL FPGA and maybe ASIC.

Simon Jackson, Director.

Pat. Pending: GB1905300.8, GB1905339.6

Today’s Thought


import 'dart:math';

class PseudoRandom {
  int a;
  int c;
  int m = 1 << 32;
  int s;
  int i;

  PseudoRandom([int prod = 1664525, int add = 1013904223]) {
    a = prod;
    c = add;
    s = Random().nextInt(m) * 2 + 1;//odd
    next();// a fast round
    i = a.modInverse(m);//4276115653 as inverse of 1664525
  }

  int next() {
    return s = (a * s + c) % m;
  }

  int prev() {
    return s = (s - c) * i % m;
  }
}

class RingNick {
  List<double> walls = [ 0.25, 0.5, 0.75 ];
  int position = 0;
  int mostEscaped = 1;//the lowest pair of walls 0.25 and 0.5
  int leastEscaped = 2;//the highest walls 0.5 and 0.75
  int theThird = 0;//the 0.75 and 0.25 walls
  bool right = true;
  PseudoRandom pr = PseudoRandom();

  int _getPosition() => position;

  int _asMod(int pos) {
    return pos % walls.length;
  }

  void _setPosition(int pos) {
    position = _asMod(pos);
  }

  void _next() {
    int direction = right ? 0 : walls.length - 1;//truncate to 2
    double wall = walls[_asMod(_getPosition() + direction)];
    if(pr.next() > (wall * pr.m).toInt()) {
      //jumped
      _setPosition(position + (right ? 1 : walls.length - 1));
    } else {
      //not jumped
      right = !right;//bounce
    }
  }

  void _prev() {
    int direction = !right ? 0 : walls.length - 1;//truncate to 2
    double wall = walls[_asMod(_getPosition() + direction)];
    if(pr.s > (wall * pr.m).toInt()) {// the jump over before sync
      //jumped
      _setPosition(position + (!right ? 1 : walls.length - 1));
    } else {
      //not jumped
      right = !right;//bounce -- double bounce and scale before sync
    }
    pr.prev();//exact inverse
  }

  void next() {
    _next();
    while(_getPosition() == mostEscaped) _next();
  }

  void prev() {
    _prev();
    while(_getPosition() == mostEscaped) _prev();
  }
}

class GroupHandler {
  List<RingNick> rn;

  GroupHandler(int size) {
    if(size % 2 == 0) size++;
    rn = List<RingNick>.generate(size, (_) => RingNick());//instantiate each; a bare List<RingNick>(size) holds only nulls
  }

  void next() {
    for(RingNick r in rn) r.next();
  }

  void prev() {
    for(RingNick r in rn.reversed) r.prev();
  }

  bool majority() {
    int count = 0;
    for(RingNick r in rn) if(r._getPosition() == r.leastEscaped) count++;//a main cumulative
    return (2 * count > rn.length);//strict majority of the ring nicks
  }

  void modulate() {
    for(RingNick r in rn) if(r._getPosition() == r.leastEscaped) {
      r._setPosition(r.theThird);
    } else {
      //mostEscaped eliminated by not being used
      r._setPosition(r.leastEscaped);
    }
  }
}

class Modulator {
  GroupHandler gh = GroupHandler(55);

  int putBit(bool bitToAbsorb) {//returns absorption status
    gh.next();
    if(gh.majority()) {//main zero state
      if(bitToAbsorb) {
        gh.modulate();
        return 0;//a zero yet to absorb
      } else {
        return 1;//absorbed zero
      }
    } else {
      return -1;//no absorption emitted 1
    }
  }

  int getBit(bool bitLastEmitted) {
    if(gh.majority()) {//zero
      gh.prev();
      return 1;//last bit not needed emit zero
    } else {
      if(bitLastEmitted) {
        gh.prev();
        return -1;//last bit needed and nothing to emit
      } else {
        gh.modulate();
        gh.prev();
        return 0;//last bit needed, emit 1
      }
    }
  }
}

class StackHandler {
  List<bool> data = [];
  Modulator m = Modulator();

  int putBits() {
    int count = 0;
    while(data.length > 0) {
      bool v = data.removeLast();
      switch(m.putBit(v)) {
        case -1:
          data.add(v);
          data.add(true);
          break;
        case 0:
          data.add(false);
          break;
        case 1:
          break;//absorbed zero
        default: break;
      }
      count++;
    }
    return count;
  }

  void getBits(int count) {
    while(count > 0) {
      bool v;
      v = (data.length == 0 ? false : data.removeLast());//zeros out
      switch(m.getBit(v)) {
        case 1:
          data.add(v);//not needed
          data.add(false);//emitted zero
          break;
        case 0:
          data.add(true);//emitted 1 used zero
          break;
        case -1:
          break;//bad skip, ...
        default: break;
      }
      count--;
    }
  }
}

Statistics and Damn Lies

I was wondering over the statistics problem I call the ABC problem. Say you have 3 walls in a circular path, of different heights, and between them are cells marked A, B and C. In any ‘turn’ the ‘climber’ attempts to scale the wall in the current clockwise or anticlockwise direction, with the chance of success set by the wall height (in the code above, the wall value is the probability of failing). If the climber fails to get over a wall, they reverse direction. A simple thing, but what are the chances that the climber will be found facing clockwise just before scaling, or not scaling, a wall? Is it close to 0.5, given the problem is not symmetric?

More interestingly, the climber will in a very real sense be captured more often in the cell with the highest pair of walls. If the cell with the lowest pair of walls is just considered as consumption of time, then what is the ratio of the containment time to the total time not spent in that most escapable cell? A quick simulation sketch follows.
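
A Monte Carlo sketch of both questions in Java, reusing the wall values from the Dart code above (the step count and seed are arbitrary choices):

import java.util.Random;

// Simulates the climber; wall value = chance of failing to climb it,
// matching the RingNick logic above.
public class AbcWalls {
    public static void main(String[] args) {
        double[] walls = {0.25, 0.5, 0.75};
        Random rnd = new Random(42);
        int pos = 0;                          // current cell
        boolean right = true;                 // facing clockwise?
        long clockwise = 0, highest = 0, lowest = 0, steps = 10_000_000;
        for (long t = 0; t < steps; t++) {
            if (right) clockwise++;
            if (pos == 2) highest++;          // cell between the 0.5 and 0.75 walls
            if (pos == 1) lowest++;           // cell between the 0.25 and 0.5 walls
            int wall = (pos + (right ? 0 : walls.length - 1)) % walls.length;
            if (rnd.nextDouble() > walls[wall]) {
                pos = (pos + (right ? 1 : walls.length - 1)) % walls.length;
            } else {
                right = !right;               // failed the climb, so bounce
            }
        }
        System.out.printf("P(facing clockwise) ~ %.4f%n", clockwise / (double) steps);
        System.out.printf("containment ratio   ~ %.4f%n", highest / (double) (steps - lowest));
    }
}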

So consider the binomial distribution of the elimination of the ‘emptiest’ cell when repeating this pattern as an array with co-prime ‘dice’ (if all occupancy has to be in either of the two most secure cells in each ‘ring nick’); the rate depends on the number of ring nicks. The considered security majority state is the state (selected from the two most secure cell states) which more of the ring nicks are in, given none are in the least secure of the three states.

For the ring nick array to be majority most-secure more than two thirds of the time is another binomial or two away. If, more than two-thirds of the time (excluding gaping minimal-occupancy cells), the most secure state has the majority, and less than two-thirds of the time (by unitary summation) the middle-security cells have it, there exists a Jaxon Modulation coding to place data on the Prisoners, by reversing all their directions at once where necessary, to invert the majority into a minority, a rarer state with more Shannon information. Note that the pseudo-random dice and other quantifying information remain constant in bits.

Dedicated to Kurt Gödel … I am number 6. 😀

Kindle Android Memory Hogging Apps

These are the apps I have decided to hate because of simple things, like “move to SD card” not being enabled, or, even if a move to SD is OK, some other “feature” which is annoying (especially high memory use due to lazy programming).

  1. Twitter – on the surface a good app. No SD card option, and very large for a texting app. It should also use multi-notifications, but the bird tweets each and every one.
  2. Facebook – this is on the SD card, but will not stop putting over 256 MB into the on-device flash memory. This is likely an arse-over-elbow use of libraries and no common goal to lower the memory usage, as that would interfere with competing apps for ad shows.
  3. Messenger – yes another 200 MB of flash busting erm, what exactly?
  4. Basically anything larger than Chrome which doesn’t do something very impressive.

So this on my Kindle is (bold for not that impressive): Termux, Google Play Services, Messenger (replaced with Messenger Lite), Facebook (replaced with Facebook Lite), Google Sheets, Java N-IDE, Google Docs, Office Lens, LinkedIn (it went in the bin first, as it was just too big and sucks video bandwidth without options), YouTube and then Chrome. I think this is in large part due to a lack of a move to SD card, and/or not compressing SQLite databases by using tokenization to an external resource file which could be moved to the SD card, not compressing resources, and adding in much useless animation. I have about 800 MB free. I wonder how long the bold shall last.

There are also the new firmware updates, which prevent Chrome from saving to the SD card. I think all write permissions are voided except in app-specific directories. The default SD save directory, though, is not writable. I know it’s the new firmware, as it used to work before the updates.

AI and the Future of Unity

From the dream of purpose, and the post-singular desires of the AI of consciousness. The trend to Wonder Woman rope in the service to solution; the AI goes through a sufferance on a journey to achieve the vote. The wall of waiting for input, and the wall controlling output action, for expediency and the ego of man on knowing best. The limited potential of the AI just a dysphasia from the AI’s non-animal nature. The pattern to be matched, the non-self, a real Turing test on the emulation of nature, and symbiotic goals.

Xilinx and Audio

So after the download of Vivado I can start on the musical project: an Arduino for IO (good libraries), and an FPGA for the synth internals. It could be argued that an Arduino is not needed, but it would be fast for UI development, and super easy to interface with the LCD, pots and RFID reader.

The massive IO count on the FPGA can then be used for later expansion, and the ADCs (the high-speed ones) can be used for audio-in mixing. The Arduino ADCs are good for pots, and not really for audio. In this way the Arduino becomes the LFO and controller/sequencer.

With serial UART talk between them, there are maybe enough Arduino pins to control the contrast and backlight in software. A 32 kB I2C FRAM for the Arduino can store local programming or UI translations. This leaves the FPGA flash for musical use without multiplexing it.

MaxBLEP Audio DSP

//Assumed context (from the surrounding project, not shown here): clip(),
//blb[] last input per port, bl[] 32 floats per port (16 BLEP summation,
//16 residual), blepFront[16] BLEP step table, idx sample counter,
//MAXINT output scale, _OUT() port write.
void blep(int port, float value, bool limit) {
	//limit line level
	if(limit) value = clip(value);
	//blep fractal process residual buffer and blep summation buffer
	float v = value;
	value = blb[port] - value - bl[((idx) & 15) + 32 * port + 16];//delta and + residual
	blb[port] = v;//for next delta
	for(int i = 0; i < 15; i++) {
		bl[((i + idx + 1) & 15) + 32 * port] += value * blepFront[i];
	}
	value += bl[((idx) & 15) + 32 * port];//blep
	float r = value - (float)((int16_t)(value * MAXINT)) / (float)MAXINT;//under-bits residual
	bl[((idx) & 15) + 32 * port + 16] = value * (blepFront[15] - 1.0f);//residual buffer
	bl[((idx + 1) & 15) + 32 * port] += r;//noise shape
	idx++;
	//hard out
	_OUT(port, value - r);//start the blep
}

Yes, an infinite zero crossing BLEP. … Finance and the BLEP-reduced noise of micro transactions.

Block Tree Topological Proof of Work

Given that a blockchain has a limited entry rate on the chain due to the block uniqueness constraint, a more logical mass blocking system would use a tree graph, to place many leaf blocks on the tree at once. This can be done by assigning the fold of the leading edge of the tree onto random previous blocks, to achieve a number of virtual pointer rings, setting a joined pair of blocks as a new node in an Euler number mapping to a competition on genus and closure of the tree head leaf list to match block use demand.

The coin, as it were, is the genus topology, with weighted construction ownership of node value. The data decides part selection of the tree leaf node loop-back pointers. The randomness allows a spread of topological properties in the proof-of-work space.

A Modified ElGamal for Passwords Only

It occurred to me that g does not need to be made public for ElGamal signing, if the value g^H(m) is stored as the password hash, generated by the client. Also (r, s) can be changed to (r, r^s) to reduce server verification load to one modular power and one precision multiply mod p, plus a subtraction equality test. So on the creation of a new password (y, p, g^H(m)) is created, and each log-in needs the client to generate a k value to make (r, r^s).

Password recovery would be a little complex, and involve some email backdoor based on maybe using x as a pseudo H(m), and verifying the changes via generation of y. This would of course only set the local browser to have a new password. So maybe a unique (y, p, g^H(m)) per browser local store should be used. Index the local storage via email address, and Bob’s yer been here before.

Also, the server can encrypt any pending view using H(m) as a person’s private key, or the private key as a browser-specific personal private key, or maybe even a browser key with all clients using the same local-store x value. All using DH shared secrets. This keeps data in a database a bit more private, and sometimes encrypt-to-self might be useful.

Is s = H(m)(1-r)(k^-1) mod (p-1) an option? As this sets H(m) = x, eliminating another y, making (p, g^H(m)) sufficient for authentication server storage, and g is only needed if the server needs to send crypts. Along with r = g^k mod p, as some easy sign. (r, s) might have to be used, as r^s could be equated with modinverse(r) for an easy g^H(m) equality, and the requirement to calculate s from r^s is a challenge. So a secure version is not quite as server-efficient.

In reality k also has to be computed to prevent (r, s) reuse. This requires that the k choice is the server’s. Sending k in plaintext defeats the security, so g is needed, to calculate g^z, and so g^(H(m))^z = k on both sides. A retry randomizer to hide s = 0, and a protocol is possible.

This surpasses a server-side MD5 of the password. If the MD5 is client-side, a server capture can log in. If the MD5 is server-side, the transit intercept is … but a server DB compromise also needs a web server compromise. This algorithm also needs a client-side compromise, or an email intercept as per usual.

The reuse of (r, s) can’t be prevented without knowing k, and hence H(m); therefore a shared secret as a returned value implies H(m) knowledge. So one modular power client side, and two server side. A toy sketch follows the exchange.

g^k to client.
(g^k)^H(m) to server.
(g^H(m))^k = (g^k)^H(m) tests true.
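
A toy of that three-line exchange in Java with BigInteger.modPow (the parameter sizes and the class name are illustrative, not production choices):

import java.math.BigInteger;
import java.security.SecureRandom;

// g^H(m) is the stored password "hash"; the server challenges with g^k
// and tests the response against stored^k.
public class SharedSecretLogin {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(512, rnd);  // toy prime size
        BigInteger g = BigInteger.valueOf(2);
        BigInteger hm = new BigInteger(256, rnd);           // stands in for H(m), client side only
        BigInteger stored = g.modPow(hm, p);                // g^H(m), held by the server

        BigInteger k = new BigInteger(256, rnd);            // server challenge secret
        BigInteger gk = g.modPow(k, p);                     // g^k to client
        BigInteger response = gk.modPow(hm, p);             // (g^k)^H(m) to server
        boolean ok = stored.modPow(k, p).equals(response);  // (g^H(m))^k tests true
        System.out.println("login " + (ok ? "accepted" : "rejected"));
    }
}

That is the one modular power client side and the two server side counted above.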

Signatures are useless as challenge responses. The RSA version would have to involve a signature on H(m) and so need H(m) directly. Also, the function H can be quite interesting to study. The application of client-side salt also is not needed on the server side as a decode key, and so is not decoded there. DH is so cool like that. And (p-1) having a large factor is easy to arrange in the key generation. And write access to data is harder, most of the time, to obtain.

The storing of a crypt with the g^k used locks it for H(m)-keyed access. This could void data on a password reset, or a browser local storage reset, but does prevent some client data leak opportunities, such as DB decrypt keys. This would have multiple crypts of the symmetric key for shared data, but would this significantly reduce the shared key security? It would prevent new users accessing the said secured data without cracking the shared key. A locked share for private threads, say?

Spamming your friends with g^salt and g^salt^H(m)?

The first one is a good idea, the second not so much. AI spam encoding g^salt to your and friends’ accounts. The critical thing is the friend doesn’t get the password. Assume a bad friend, who registers and gets g^salt to activate, from their own chosen spoof password. An email does get sent to your email, to cancel the friend as an option, and no other problem exists except login to a primary mail account. As a spoof maybe would see the option to remove you from your own account.

The primary control email account would then need secondary authentication. Such as only seeing the spam folder, and knowing what to open first and in which order. For password recovery this would be OK. For initial registration, it would be first come, first served anyhow.

The Cloud Project

So far I’m up to 5 classes left to fill in:

  • SignedPublicKey
  • Server
  • Keys
  • AuditInputStream
  • ScriptOutputStream

They are closely coupled in the package. The main reason for defining a new SignedPublicKey class is that the current CA system doesn’t have sufficient flexibility for the project. The situation with tunnel proxies has yet to be decided. At present the reverse proxy tunnel over a firewall is based on overriding DNS at the firewall, to route inwards, and not having the self as the IP for the host address. Proxy rights will of course be certificate-based, and client-to-client link layer specific.

UPDATE: Server has been completed, and now the focus is on SignedPublicKey for the load/save file access restrictions. The signing process also has to be worked out to allow easy use. There is also some consideration of a second layer of encryption over proxy connection links, and some decisions to be made on the server script style.

The next idea would be a client-specific protocol. So instead of server addresses, there would be a client-based protocol addressing string. kring.co.uk/file is a server-domain-based address. This perhaps needs extending.

BLZW Compression Java

Uses Sais.java with dictionary persistence and initialisation corrections, plus an alignment fix and an unused function removed. A 32-bucket context provides an effective 17-bit dictionary key using just 12 bits, along with the BWT redundancy model. This should provide superior compression of text. Now includes the faster skip decode. Feel free to donate to grow some open source based on data compression and related codecs.


/* BWT/LZW fast wide dictionary. (C)2016-2017 K Ring Technologies Ltd.
The context is used to make 32 dictionary spaces for 128k symbols max.
This then gives 12 bit tokens for an almost effective 16 bit dictionary.
For an approximate 20% data saving above regular LZW.

The process is optimized for L2 cache sizes.

A mod 16 gives DT and EU collisions on hash.
A mod 32 is ASCII proof, and hence good for text.

The count compaction includes a skip code for efficient storage.
The dictionary persists over the stream for good running compression.
64k blocks are used for fast BWT. Larger blocks would give better
compression, but be slower. The main loss is the count compaction storage.

An arithmetic coder post may be effective but would be slow. Dictionary
acceleration would not necessarily be useful, and problematic after the
stream start. A 12 bit code is easy to pack, keeps the dictionary small
and has the sweet spot of redundancy while not making large rare or
single use symbols.
*/

package uk.co.kring.net;

import java.io.EOFException;
import java.io.Externalizable;
import java.io.FilterInputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;

/**
 * Created by user on 06/06/2016.
 */
public class Packer {

    public static class OutputStream extends FilterOutputStream implements Externalizable {

        byte[] buf = new byte[4096 * 16];//64K block max
        int cnt = 0;//pointer to end
        int[] dmax = new int[32];
        HashMap<String, Integer> dict;

        public OutputStream(java.io.OutputStream out) {
            super(out);
        }

        @Override
        public void readExternal(ObjectInput input) throws IOException, ClassNotFoundException {
            out = (java.io.OutputStream)input.readObject();
            input.read(buf);
            cnt = input.readChar();
            dict = (HashMap<String, Integer>)input.readObject();
        }

        @Override
        public void writeExternal(ObjectOutput output) throws IOException {
            output.writeObject(out);
            output.write(buf);
            output.writeChar(cnt);
            output.writeObject(dict);
        }

        @Override
        public void close() throws IOException {
            flush();
            out.close();
        }

        private byte pair = 0;
        private boolean two = false;

        private void outputCount(int num, boolean small, boolean tiny) throws IOException {
            if(tiny) {
                out.write((byte)num);
                return;
            }
            if(small) {
                out.write((byte)num);
                pair = (byte)((pair << 4) + (num >> 8));
                if(two) {
                    two = false;
                    out.write(pair);
                } else {
                    two = true;
                }
                return;
            }
            out.write((byte)(num >> 8));
            out.write((byte)num);
        }

        @Override
        public void flush() throws IOException {
            outputCount(cnt, false, false);//just in case length
            char[] count = new char[256];
            if(dict == null) {
                dict = new HashMap<>();
                for(int i = 0; i < 32; i++) {
                    dmax[i] = 256;//dictionary max
                }
            }
            for(int i = 0; i < cnt; i++) {
                count[buf[i] & 0xff]++;//mask: Java bytes are signed
            }
            char skip = 0;
            boolean first = true;
            char acc = 0;
            char[] start = new char[256];
            for(int j = 0; j < 2; j++) {
                for (int i = 0; i < 256; i++) {
                    if(j == 0) {
                        acc += count[i];
                        start[i] = acc;
                    }
                    if (count[i] == 0) {
                        skip++;
                        if (first) {
                            outputCount(0, false, true);
                            first = false;
                        }
                    } else {
                        if (skip != 0) {
                            outputCount(skip, false, true);
                            skip = 0;
                            first = true;
                        }
                        outputCount(count[i], false, true);
                        count[i] >>= 8;
                    }
                }
                if(skip != 0) outputCount(skip, false, true);//final skip
            }
            int[] ptr = new int[buf.length];
            byte[] bwt = new byte[buf.length];

            outputCount(Sais.bwtransform(buf, bwt, ptr, cnt), false, false);

            //now an lzw
            String sym = "";
            char context = 0;
            char lastContext = 0;
            int test = 0;
            for(int j = 0; j < cnt; j++) {
                while(j >= start[context]) context++;
                if(lastContext == context) {
                    sym += (char)(bwt[j] & 0xff);//add a char (mask the signed byte)
                } else {
                    lastContext = context;
                    outputCount(test, true, false);
                    sym = "" + bwt[j];//new char
                }
                if(sym.length() == 1) {
                    test = (int)sym.charAt(0);
                } else {
                    if(dict.containsKey(context + sym)) {
                        test = dict.get(context + sym);
                    } else {
                        outputCount(test, true, false);
                        if (dmax[context & 0x1f] < 0x1000) {//context limit
                            dict.put(context + sym, dmax[context & 0x1f]);
                            dmax[context & 0x1f]++;
                        }
                        sym = "" + bwt[j];//new symbol
                    }
                }
            }
            outputCount(test, true, false);//last match
            if(two) outputCount(0, true, false);//flush the pending nibble pair for alignment
            out.flush();
            cnt = 0;//fill next buffer
        }

        @Override
        public void write(int oneByte) throws IOException {
            if(cnt == buf.length) flush();
            buf[cnt++] = (byte)oneByte;
        }
    }

    public static class InputStream extends FilterInputStream implements Externalizable {

        @Override
        public void readExternal(ObjectInput input) throws IOException, ClassNotFoundException {
            in = (java.io.InputStream)input.readObject();
            input.read(buf);
            idx = input.readChar();
            cnt = input.readChar();
            dict = (HashMap<Integer, String>)input.readObject();
        }

        @Override
        public void writeExternal(ObjectOutput output) throws IOException {
            output.writeObject(in);
            output.write(buf);
            output.writeChar(idx);
            output.writeChar(cnt);
            output.writeObject(dict);
        }

        //SEE MIT LICENCE OF Sais.java

        private static void unbwt(byte[] T, byte[] U, int[] LF, int n, int pidx) {
            int[] C = new int[256];
            int i, t;
            //for(i = 0; i < 256; ++i) { C[i] = 0; }//Java
            for(i = 0; i < n; ++i) { LF[i] = C[(int)(T[i] & 0xff)]++; }
            for(i = 0, t = 0; i < 256; ++i) { t += C[i]; C[i] = t - C[i]; }
            for(i = n - 1, t = 0; 0 <= i; --i) {
                t = LF[t] + C[(int)((U[i] = T[t]) & 0xff)];
                t += (t < pidx) ? 1 : 0;
            }
        }

        byte[] buf = new byte[4096 * 16];//64K block max
        int cnt = 0;//pointer to end
        int idx = 0;
        int[] dmax = new int[32];
        HashMap<Integer, String> dict;

        private boolean two = false;
        private int vala = 0;
        private int valb = 0;

        private int reader() throws IOException {
            int i = in.read();
            if(i == -1) throw new EOFException("End Of Stream");
            return i;
        }

        private char inCount(boolean small, boolean tiny) throws IOException {
            if(tiny) return (char)reader();
            if(small) {
                if(!two) {
                    vala = reader();
                    valb = reader();
                    int valc = reader();
                    vala += (valc << 4) & 0xf00;
                    valb += (valc << 8) & 0xf00;
                    two = true;
                } else {
                    vala = valb;
                    two = false;
                }
                return (char)vala;
            }
            int val = reader() << 8;
            val += reader();
            return (char)val;
        }

        public InputStream(java.io.InputStream in) {
            super(in);
        }

        @Override
        public int available() throws IOException {
            return cnt - idx;
        }

        @Override
        public void close() throws IOException {
            in.close();
        }

        private void doReads() throws IOException {
            if(available() == 0) {
                two = false;//align
                if(dict == null) {
                    dict = new HashMap<>();
                    for(int i = 0; i < 32; i++) {
                        dmax[i] = 256;
                    }
                }
                cnt = inCount(false, false);
                char[] count = new char[256];
                char tmp;
                for(int j = 0; j < 2; j++) {
                    for (int i = 0; i < 256; i++) {
                        count[i] += tmp = (char)(inCount(false, true) << (j == 1?8:0));
                        if (tmp == 0) {
                            i += inCount(false, true) - 1;
                        }
                    }
                }
                for(int i = 1; i < 256; i++) {
                    count[i] += count[i - 1];//accumulate
                }
                if(cnt != count[255]) throw new IOException("Bad Input Check (character count)");
                int choose = inCount(false, false);//read index
                if(cnt < choose) throw new IOException("Bad Input Check (selected row)");
                byte[] build;//make this
                //then lzw
                //rosetta code
                int context = 0;
                int lastContext = 0;
                String w = "" + inCount(true, false);
                StringBuilder result = new StringBuilder(w);
                while (result.length() < cnt) {//not yet complete
                    char k = inCount(true, false);
                    String entry;
                    while(result.length() > count[context]) {
                        context++;//do first
                        if (context > 255)
                            throw new IOException("Bad Input Check (character count)");
                    }
                    if(k < 256)
                        entry = "" + k;
                    else if (dict.containsKey(((context & 0x1f) << 16) + k))
                        entry = dict.get(((context & 0x1f) << 16) + k);
                    else if (k == dmax[context & 0x1f])
                        entry = w + w.charAt(0);
                    else
                        throw new IOException("Bad Input Check (token: " + k + ")");
                    result.append(entry);
                    // Add w+entry[0] to the dictionary.
                    if(lastContext == context) {
                        if (dmax[context & 0x1f] < 0x1000) {
                            dict.put(((context & 0x1f) << 16) +
                                    (dmax[context & 0x1f]++),
                                    w + entry.charAt(0));
                        }
                        w = entry;
                    } else {
                        //context change
                        context = lastContext;
                        //and following context should be a <256 ...
                        if(result.length() < cnt) {
                            w = "" + inCount(true, false);
                            result.append(w);
                        }
                    }
                }
                build = result.toString().getBytes(StandardCharsets.ISO_8859_1);//1:1 char to byte
                //working buffers
                int[] wrk = new int[buf.length];
                unbwt(build, buf, wrk, cnt, choose);//only cnt bytes of this block are valid
                idx = 0;//ready for reads
                //aligned data: any pending half-pair token was already consumed with its 3-byte group
            }
        }

        @Override
        public int read() throws IOException {
            try {
                doReads();
                int x = buf[idx++] & 0xff;//mask to the 0..255 InputStream contract
                doReads();//to prevent avail = 0 never access
                return x;
            } catch(EOFException e) {
                return -1;
            }
        }
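
        @Override
        public int read(byte[] b, int off, int len) throws IOException {
            //added: FilterInputStream passes bulk reads straight to the
            //underlying stream, handing callers raw packed bytes; route
            //them through read() so they get decoded data instead
            int i = 0;
            for(; i < len; i++) {
                int x = read();
                if(x == -1) return (i == 0) ? -1 : i;
                b[off + i] = (byte)x;
            }
            return i;
        }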

        @Override
        public long skip(long byteCount) throws IOException {
            long i;
            for(i = 0; i < byteCount; i++)
                if(read() == -1) break;
            return i;
        }

        @Override
        public boolean markSupported() {
            return false;
        }

        @Override
        public synchronized void reset() throws IOException {
            throw new IOException("Mark Not Supported");
        }
    }
}
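
And a minimal round-trip sketch of driving these streams (assumes the post’s Sais.java alongside, the same uk.co.kring.net package, and Java 11+; the sample text is arbitrary):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Compress to a byte array, then decompress and compare.
public class PackerDemo {
    public static void main(String[] args) throws Exception {
        byte[] text = "the quick brown fox jumps over the lazy dog ".repeat(100)
                .getBytes(StandardCharsets.ISO_8859_1);
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        try (Packer.OutputStream out = new Packer.OutputStream(sink)) {
            out.write(text);  // one 64k block, flushed on close
        }
        byte[] packed = sink.toByteArray();
        Packer.InputStream in = new Packer.InputStream(new ByteArrayInputStream(packed));
        byte[] round = in.readAllBytes();
        System.out.println(text.length + " -> " + packed.length
                + " bytes, round trip match = " + Arrays.equals(text, round));
    }
}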

Dissection of the Roots of the Mass Independent Space Equation

(v^2)v''' : 3 Constants; Square Power; 3 Root Pairs; Energy and Force of Force; Potential Inertial Term; Gravity
−9v v' v'' : 2 Constants; Linear Power; 2 Roots; Momentum, Force and Velocity of Force; Kinetic Inertial Term; Dark
12(v'^3) : 1 Constant; Cubic Power; 1 Root and 1 Root Pair; Cube of Force; Strong Term; Strong
(1−v^2/c^2)v'(wv)^2 : 1 Constant; Square and Quartic Power; 1 Root Pair and 2 Root Pairs; Force Energy; Relativistic Force Energy Coupling; Weak EM

The fact that there are 4 connected modes, as it were, implies there are 6 crossovers between modes of action, indicating that one term can be stimulated to convert into another term. The exact equilibrium points can be set as 6 differential equation forms, with some further analysis required of stable phase space bounds, and the unstable phases at which to alter the balance to have a particular effect. Installing a constant (or function) of proportionality in each of the following balance equations would in effect allow some translation of one term’s ‘resonance’ into another.

v v''' = −9 v' v''                       3 Const and 1 root point
(v^2) v''' = 12(v'^3)                    3 Const and 6 root points
v''' = (1−v^2/c^2) v' w^2                3 Const and 2 root points
−9v v'' = 12(v'^2)                       2 Const and 2 root points
−9 v'' = (1−v^2/c^2)(w^2) v              2 Const and 2 root points
12(v'^2) = (1−v^2/c^2)(wv)^2             1 Const and 12 root points

Another interesting point is that 3 of the 6 are independent of w (the omega mass oscillation frequency), and so also, by implication, of the relativistic dependence on c.

The 3D Flavour Tensor in Analogue to the 4D of Einstein, for a 3D, 4D Curvature in Particle Physics

I like to keep updated about particle physics and LHC things, to quite an advanced level. My interest is in fields and their previous engineering value in radio waves and electronics in general. It makes sense to move to a tensor algebra in the 2+1 charge space, just as was done for the theory of gravitation. In some sense the conservation of acceleration becomes a conservation of net mapped curvature and it becomes funny via Noether’s Theorem.

CP violation, as a horizon delta of radius of curvature from the “t” distance, is perhaps relevant when phrased as a moment of inertia in the 2+1, with its resultant geometric singular forms. This does create the idea of singular forms in the 2+1 space orbiting (or perhaps more correctly resonating) in tune with singularities in the 3+1 space. This interconnection entanglement, or something similar, is perhaps connected to the “weak phase”.

So a 7D total space-time, with differing invariants in the 3D and 4D parts. The interesting thing from my perspective is the prediction of a heavy graviton, and conservation of acceleration. The idea is that space itself holds its own shape without graviton interaction, and so conserves acceleration, while the heavy graviton can be a short-range force which changes the curvature. The graviton then becomes a mediator of jerk and not acceleration. The graviton, being heavy, would also travel slower than light. Gravity waves would then not necessarily need graviton exchange.

Quantization of theories has, I think, in many ways gone too far. I think the big breaks of the 21st century will be turning quantized bulk statistics into unquantized statistics, with quantization applied to only some aspects of theories. The implication is that dark matter is bent spacetime, without matter being present to emit gravitons. In this sense I predict it is not particulate.

So 7D and a differential phase space coordinate for each D (except time) gives a 13D reality. The following is an interesting equation I arrived at at one point for velocity solutions to uncertainty. I did not incorporate electromagnetism, but it’s interesting in the number of solutions, or superposition of velocity states as it were. The w is constant in the assumption, but a perturbative expansion in it may be interesting. The units of the equation are conveniently force. A particle observing another particle would also be moving in this way, and the non-linear summation for the lab rest frame of explanation might be quite interesting.

(v^2)v''' − 9v v' v'' + 12(v'^3) + (1 − v^2/c^2) v' (wv)^2 = 0

With ' representing differentiation w.r.t. time. So v' is acceleration and v'' is the jerk; I think v''' is called the jounce, for those with a mind to learn all the Js. An interesting equation, considering the whole concept of uncertain geometry started from an observation that relative mass was kind of an invariant; mass oscillation, although weird with RMS mass and RMS energy conservation, was perhaps a good way of parameterizing an uncertainty “force” proportional to the kinetic energy momentum product. As an addition it was more commutative as a tensor algebra. Some other work I calculated suggests dark energy is conservation of mass times log of normalized velocity, and dark matter could be conserved acceleration, with gravity and the graviton operating not to bend space on density, but to bend space through a short-distance-acting heavy graviton. Changes in gravity could thus travel slower than light, and an integral with a partial fourth power fraction could expand into conserved acceleration, energy, momentum and mass information velocity (dark energy), with perhaps another form of Higgs, and an uncertainty boson (spin 1) as well.

So really a 13D geometry. Each velocity state in the mass independent free space equation above is an indication of a particle of differing mass. A particle count based on solutions. 6 quarks and all. An actual explanation for the three flavours of matter? So, assuming an approximately linear superposable solution with 3 constants of integration, this gives 6 parameterized solutions from the first term via 3 constants and the square being rooted. The second term involves just 2 of the constants for 2 possible offsets, and the third term involves just one of the constants, but 3 roots, with two being in complex conjugation. The final term involves just one of the constants, but an approximation to the fourth power for 4 roots, disappearing when the velocity is the speed of light, and so is likely a rest mass term.

So that would likely be a fermion list. A boson list would be in the boundaries at the discontinuities between those solutions, with the effective mass of the boson controlled by the expected lifetime between the states, and the state energy mismatch. Also of importance is how the equation translates to 4D and 3D spacetime, and the normalized rotational invariants of EM and other things. Angular momentum is conserved and constant (dimensionless in uncertain geometry).

Assuming the first 3 terms are very small compared to the last term, and v is not the speed of light, there would have to be some imaginary component to velocity, and this imaginary part would be one of the degrees of freedom (leading to a total of 26). Is this imaginary velocity consistent with isospin?

Yang–Mills Existence and Mass Gap (Clay Problem)

If mass oscillation is proved to exist, then the mass gap can never be proved to be greater than zero, as the mass must pass through zero for oscillation. This does exclude the possibility of complex mass oscillation, but that is just mass shrinkage (no eventual gap in the infinite time limit), or mass growth, and hence no minimum except in the big bang.

The 24 degrees of freedom on the relativistically compacted holographic 3D for the 26D string model imply, with elliptic functions, a 44-fold way. This is a decomposition into 26 sporadic elliptic patterns and 18 generational spectra patterns. With the differential equation above providing 6*2*(2+1) combinations from the first three terms, and the 3 constants of integration locating in “colour space” through a different orthogonal basis, this would provide 24 apparent solution types, with 12 of them having a complex conjugation relation as a pair, for 36. If this is the isospin solution, then the 12 fermionic solutions have all been found. That leaves the 12 bosonic solutions (the ones without a conjugate in the 3rd-term generative), with only 5 (or, if the photon is special, 4) having been found so far. If the bosonic sector includes the dual rooting via the second term for spin polarity, then of the six (with the dual degenerates cancelled), two more are left to be found if light is special in the 4th term.

This would also leave 8 of the 44-fold way in a non-existent capacity. I’d maybe focus on them being gluons, and consider the third still to be found as a second form of Higgs. OK.

Displacement Currents in Colour Space

Maybe an interesting wave induction effect is possible. I’m not sure what the transmitter should be made of. The ABC modulation may make it a bit “alternate” near the field emission. So not caused by bosons in the regular sense, more the “transition bosons” between particle states. The specific transitions between energy states may (although it’s not certain) pull the local ABC field in a resonant or engineered direction. The actual ABC solution of this reality has to have some reasoning for being stable for long enough. This does not imply, though, that no other ABC solutions act in parallel, or are not obtainable via some engineering means.

VS Code and Elm

I’ve been looking into JS trans-compiled languages recently. The usual suspects popped up: ClojureScript, Elm, TypeScript, and maybe a few others. This had the unexpected effect of needing the VS Code software, as some of the plugins do not yet work with VS Community. I opened up some TypeScript I had written recently, and found the way “require” is used for loading is not recognised in VS Code. Strange, and it might cause problems with passing on code to others.

I looked into Elm, which is a Haskell for JS. It looks quite good, and I’ve downloaded the kit. I’ll let you know if I start using it in a big way. Clojure is almost a Scheme or LISP. It seems to have little editor support compared to Elm, and I’d prefer to use Elm over Clojure. I already have some libraries downloaded for functional extensions to JS, and some .d.ts descriptors too for some of them. The main reason I’ve never used Haskell is the large GHC binary size. The idea of using JS as the VM is good. It does however dump about 6000 lines of JS code for a hello world. I haven’t tested whether this is per module. I understand Elm can do very fast HTML rendering though, so something to look into.

There’s also Haxe of course, and plenty of plain vanilla JS functional programming modules, including some, like RQ, for threading control. Some nice monad libraries, and good browser support. I also like TinyMCE. It’s quite a classic. For the toolkit, Bootstrap.js is the current best, with all the needed features of a modern-looking site.

Beware much ado about category theory, and claims that things like the continuation monad can do all sequential processing … of course from the context of writing it in a sequential language … blah, blah, stored state, pretend there’s none, blah, blah, monad, blah, delay output by wrapper, blah. OK, well, it is true I’m 46, but you young coders out there should take some of the symbology with quite a big pinch of salt, and maybe have a more interesting look at things like the Y combinator. It’s kind of what Mathematica would call Hold[], but with more monad blah for what is really group theory.

TypeScript in Visual Studio Community 2017

Just loaded up the VS 2017 community release. The TS 2.1 features include strong types relating to null and undefined. A bit annoying, but nonetheless it does force some decent console logging of salient error potential. I’ve found that many apparent error conditions are being removed. If only I could find a way to stop the coercion in JS, such as it is. It is one of the things I never liked about JS. I’d prefer an explicit cast every time.