9+4+1+1+3*4=27 and a 9th Gluon for 26 Not

It still comes to mind that the "Tits group gluon" might be a real thing: although there seem to be eight gluons, the ninth would sit in the symmetry of self attraction, perhaps causing a shift in the physical inertia away from a predicted, rather than fitted-in, constant of nature.

There would appear to be only two types of self-dual coloured gluon needed in the strong nuclear force, as though the cube roots of unity were entering into the complex analysis within the equations of the universe.

9+4+1+1+3*4=27

The 3*4 is the fermionic 12, while the relativistic observational deviation between the abstract conceptual observation frame and the actual moving observation particle provides for the cyclotomic 9+4+1+1 = 15, one of which is not existential in itself but is more a sub-factor of one of the other essentials. It also points out a 3*5 that may be in there somewhere.

Given the Tits gluon, the number of bosons would be 14; removing 8 for gluons leaves 6, removing 4 for the electroweak boson set leaves 2, and removing the Higgs leaves 1 boson left to discover for that amount of complexity in the bosonic cyclotomic groups.

The fantastic implications of the 26-particle group and the underlying fundamentals, which lead to strong complex-rooted pairs and leptonic pair set separation? Well, that's another future.

Roll on the Planckon, as good a name as any. The extension of any GUT beyond it would be either some higher bosonic cyclotomy or a higher-order effect of fermions leading to deviation from Heisenberg uncertainty.

Up          Charm       Top
Down        Strange     Bottom
Electron    Muon        Tau
E Neutrino  M Neutrino  T Neutrino
H           Photon      W+
?           Z0          W-
Gluon       Gluon       Gluon
Gluon       Tits Gluon  Gluon
Gluon       Gluon       Gluon

Dimensions of Manifolds

The Lorentz manifold is 7 dimensional, with 3 space-like, 1 time-like and 3 velocity-like dimensions, while the other connected manifold is 2 space-like, 1 time-like, 2 velocity-like and one dimensionless "unitless" dimension. So the 6-dimensional "charge" manifold has the properties of perhaps 2 toroids and 2 closed path lines in a topological product.

Metres to the 4th power per second: the rate of change of a 4D spatial object, perhaps. The Lorentz manifold has a similar metres to the 6th power per second squared measure of dimensional product. Or area per kilogram and area per kilogram squared respectively. This links well with the idea of an invariant gravitational constant for a dimensionless "force" measure, and a mass "charge" in the non-Lorentz manifold of root kilogram.

Root seconds per metre? Would this be the Uncertain Geometry secondary "quantum mass per length field" and the "relativistic invariant Newtonian mass per length field"? To put it another way, the constant G maps the kilograms squared per unit area into a force, while the dimensionless quantity (not in units of force) becomes a projector through the dimensionless-to-force map.

GF*GD = G, and only GF is responsible for mapping to units of force with relativistic corrections. GD maps to a dimensionless quantity and hence would be invariant. In the non-Lorentz manifold the GMM/r^2 equivalent would be in units of root kilogram times root seconds per metre, and GD would have different units too. Another option is for M to be quantized and the form to be GM/r^2, as both the "charge" masses could be the same quantized quantity.
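
As a hedged sketch of the unit bookkeeping being suggested here (my restatement, assuming the split G = GF*GD with GD dimensionless):

F = G \frac{m_1 m_2}{r^2}, \qquad \left[\frac{m_1 m_2}{r^2}\right] = \mathrm{kg^2\,m^{-2}}, \qquad [G] = \mathrm{N\,m^2\,kg^{-2}}

G = G_F\,G_D, \qquad [G_D] = 1 \;\Rightarrow\; [G_F] = [G]

so only the GF factor carries units (and any relativistic correction), while GD stays an invariant, dimensionless projector as described.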

The reason the second way is more inconsistent is that the use of the product of field energies as the linear projection of force would give an M^2 over an r^2, and it would remove some logical mappings or symmetries. In terms of moment-of-inertia thinking, GMM/Mr^2 springs to mind, but it has little form beyond being an extra idea to test the maths with.

W Baryogenesis Asymmetrical Charge

The split of W plus and minus into separate particle slots takes up the idea that the charge mass asymmetry between electrons and protons can come from a tiny mass half-life asymmetry. Charge cancellation of antiparticle WW pairs may still hold, but momentum cancellation does not have to be exact, leading to a net dielectric momentum. Who knows an experiment to test this? A slight induced photon-to-Z imbalance on the charge gradient, with a neutrino emission. The cause of the W plus to minus mass ratio would be a consequence of the sporadic group orders and some twist in very taut space versus some not-as-taut space, or a dimensionless expression of a symmetrically broken balance of exacts.

The observation is of a dimensionless "unitless" dimension being invariant to spacetime and mass density dilation. My brain is doing a little parallel axis theorem on the side, and saying 3D conservation of energy is an emerging construction, with torsion being a dilative observable in taut spacetime.

A recent experiment on the inertia of spin in neutrons provides a wave induction mechanism. Amplified remote observation of non-EM radio may be possible. Lenz's law of counter-EM cancellation may not apply. It is interesting. Mass aperture flux density per bit might be OK depending on the S/N ratio. That reminds me of nV/root Hz. So root seconds is per root Hz, and nV or scaled volts is joules per mole of charge, Z integer scaled to or from joules, or just joules, or in Uncertain Geometry house units Hz. So Hz per root Hz, or just root Hz (per mole).

So root seconds per metre is per root Hz metre, as the "kilogram equivalent, but for a kind of hypercharge" in the non-Lorentz manifold perhaps. The equivalent of GD (call it HD) projects the invariant to an actual force. By moving the dilative part into GF and HF, use can be made of invariant analysis. Moles per root Hz metre is also a possible QH in F_HI = HD*QH*QH/R^2, the manifold disconnect being of a radius-calculated norm in nature. A "charge" in per noise energy metre?

Beyond the Particles to the 18n of Space with a Tits Connection

Why I lad, it's sure been a beginning. The 26 sporadic groups, and the Tits group as a connection to the 18n infinite families of simple groups. What is the impedance of free space (Google it), and does water become an increase or a decrease on that number of ohms? Inside the nature of the speed of light at a refractive boundary, what shape is the bend of a deflection, and what are the ohm expectations on the impedance to the progress of light?

Boltzmann Statistics in the Conduction of Noise Energy as Dark Energy

Just as ohm metres is a resistivity of the medium, its inverse being a conductivity of the medium, a unit-bearing quantity relating to "noise energy or intensity" with an extra metres is maybe an area-over-length transform of a bulk property of a thing. The idea that a "charge" can be a bulk noise conductivity makes for an interesting question or two. Is entanglement conducted? Can qubits be shielded? Can noise be removed from a volume of space?

If noise pushes on spacetime, is it dark energy? Is the Tits gluon connection an axion with extras, conducting into the spacetime field at a particular cycle size of the double cover of the singular group beside the 18n families, which shall be known as the flux origin? 2F4(2)′: maybe the biggest communication opportunity this side of the supermassive black hole Sagittarius A*.

The Outer Manifold Multiverse Time Advantage Hypothesis

Assuming conductivity, and locations of the dimensionally reduced holographic manifold, plus relativistic time dilation, what are the possibilities for the speed of light to entanglement conduction ratio?

As noise from entanglement comes from everywhere, any noise directionality control implies focus and control of noise amounts from differentially noise-shaped sources. Information is therefore not in the bit state, but in the error spectra of the bits.

The inner (or Lorentz) manifold is inside the horizon, and maybe the holographic principle is in error in that both manifolds project onto each other; what is inside a growing black hole remains inside, and when growth happens does the outer manifold get pushed completely further out?

A note on dimensionful invariants such as velocities: although they are invariant, they become susceptible to environmental density manipulation, whereas dimensionless invariants are truly invariant in that there is no metre or second that will ever alter the scalar value. For example, Planck's constant is dimensionless in Uncertain Geometry house units.

So even though the decode may take a while due to the distance of the environmental entanglement and its influence on statistics (is it a radius or radius-squared effect?), the isolation of transmission via a vacuum could in principle be detected. Is there a relationship between distance and time of decode for the relevance of data causality?

If the spectrum of the "noise" is detectable, then it must have properties different from other environmental noise, such as being the answer to a non-binary question; hopefully degeneracy pressure eventually forces the projection of the counter-solutions in the noise, allowing detection by statistical absence.

Of course you could see it as a way of the sender just knowing what had not been received, from basic entanglement ideas, and you might be right. The speed of temperature conduction is limited by the speed of light and by non-"cool-packed" atomic orbital occupancy, as the bulk is controlled by photon exchange and not by degeneracy limits imposed by Pauli exclusion. A quantum qubit system not under a vacuum of cooling does not produce the right answer, but does it statistically produce it very slightly more often? Is the calculation drive of the gating applying a calculative pressure to the system of qubits, such that other qubits under a different calculation pressure can either unite or compete for the honour of the results?

Quantum noise plus thermal noise equals noise? 1/f? Shot noise, for example, is due to carrier conduction in PN-junction semiconductors, in some instances. It could be considered a kind of particle observation by the current in the junction which gets (or can be) amplified. I'm not sure if it is independent of temperature in a limited (non-plasma-like) range, but it is not thermal noise.

The Lambda Outer Manifold Energy in a More General Relativity

The (inner of the horizon) manifold described by GR has a cosmological constant option associated with it. This could be filled by the "gravitation of quantum noise conduction" symmetrical outer-manifold isomorphic field with a multiplicative force (dark energy?), such that the total, when viewed in an invariant force measure picture, is not complicated by the horizon singularities of the infinities from division by zero. Most notably the Lorentz contraction of the outer manifold as it passes through the horizon on expansion or contraction of the radius.

The radius itself, not being invariant, cannot be cast to other observers to make sense; only calculated invariants (and I'd go as far as to say dimensionless invariants) have the required properties to be shared (or just agreed) between observers without questions of relativistic reshaping. Communication does not have to happen to agree on this knowledge of the entangled dimensionless measure.

CMB Focus History

With the CMB, assume a temperature bends due to density and distance from a pixel; a back step in time then becomes a new picture with its own fluctuations in density, and hence its own bend, to sum an action on a pixel for an earlier accumulation over pixels drifting to a bent velocity. Motion in the direction of heat is moved further back in time. Does anything good show up? Does the moment weight of other things, besides an inverse-square bend, look a little different?

So as the transparency emission happened over a time interval, the mass should allow a kind of focus back until the opacity happens. Then that is not as much of a problem as it appears, as it is a fractional absorption ratio, and the transparency balance passes or crosses through zero on an extrapolation of the expectation of continuation.

Then there may be further crossings back as the down-conversion of the redshift converts ultra gamma into the microwave band and lower. The fact that the IF stage of the CMB receiver has a frequency response curve, and that a redshift function may be defined by a function in variables, might make for an interesting application of an end-point integral, as the swapping of a series in dx (Simpson's rule) becomes a series in differentials of the function, but with an exponential kind of weighting better applicable to series acceleration.
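
As a reference point, one standard end-point expansion that swaps a series in steps of dx for a series in derivatives of the function, in the spirit described, is the Euler-Maclaurin formula; a hedged sketch of its leading terms:

\sum_{k=0}^{n} f(a+kh) \approx \frac{1}{h}\int_a^b f(x)\,dx + \frac{f(a)+f(b)}{2} + \sum_{j \ge 1} \frac{B_{2j}\,h^{2j-1}}{(2j)!}\left(f^{(2j-1)}(b) - f^{(2j-1)}(a)\right), \qquad b = a + nh

with B_{2j} the Bernoulli numbers; the end-point derivative terms carry exactly the kind of weighting that suits series acceleration.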

Looking back via a kind of differential calculus induction of function, right back, and back. The size of the observation aperture will greatly assist, as would effective interpolation in the size of the image, with some knowledge of general relativity and the 3D distance of the source of the CMB.

To the Manifold and Beyond

Always fun to end with a few jokes so the one about messing with your experiment from here in multiple ways, and taking one way home and not telling you if I switched it off seems a good one. There are likely more, but today has much thought in it, and there is quite a lot I can’t do. I can only suggest CERN keep the W+ and W- events in different buckets on the “half spin anti-matter opposite charge symmetry, full spin boson anti-matter same charge symmetry as could just be any” and “I wonder if the aliens in the outer universe drew a god on the outside of the black hole just for giggles.”


Differential Modulation So Far

Consider the mapping x(t+1) = k.x(t).(1-x(t)), made famous in chaos mathematics. Given a suitable set of values of k, one for each of the symbols to be represented on the stream, preferably values which produce a chaotic sequence, the sequence can be map-stretched to encompass the transmission range of the signal swing.

Knowing that the initial state is represented with an exact precision, and that all calculations are performed using deterministic arithmetic with rounding, it becomes obvious that, for a given transmit precision, it is possible to recover some pre-reception transmission by inferring the preceding chaotic sequence.

The maximum-likelihood calculation would be involved and extensive to obtain a "lock", but after lock the calculation overhead would go down and just assist in a form of error correction. In terms of noise immunity this would be a reasonable modulation, as the past estimation would become more accurate given reception time and greater knowledge of the sequence, its meaning and its scope of sense in decode.
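
A minimal Java sketch of the idea under stated assumptions: the two k values, the shared initial state and the one-iteration-per-symbol framing are illustrative, and the receiver is assumed to already have "lock" (it knows x exactly), so it recovers each symbol by testing which k reproduces the received sample.

// Hedged sketch of logistic-map symbol modulation; constants and precision handling are illustrative.
public class ChaoticModulator {
    static final double[] K = {3.7, 3.9};    // hypothetical symbol constants, both in the chaotic regime
    double x = 0.5;                          // shared, exactly represented initial state

    double sendBit(int bit) {                // one map iteration per symbol
        x = K[bit] * x * (1.0 - x);
        return x;                            // would be map-stretched to the line signal swing elsewhere
    }

    public static void main(String[] args) {
        ChaoticModulator tx = new ChaoticModulator();
        ChaoticModulator rx = new ChaoticModulator();    // receiver after lock: same state
        for (int b : new int[]{1, 0, 0, 1, 1, 0}) {
            double sample = tx.sendBit(b);
            // decode: which k takes the tracked state to the received sample?
            int decoded = Math.abs(K[0] * rx.x * (1 - rx.x) - sample)
                        < Math.abs(K[1] * rx.x * (1 - rx.x) - sample) ? 0 : 1;
            rx.x = sample;                   // track the transmitted chaotic state
            System.out.println(b + " -> " + decoded);
        }
    }
}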

Time Series Prediction

Given any time series of historical data, the prediction of the future values in the sequence is a computational task which can increase in complexity depending on the dimensionality of the data. For simple scalar data a predictive model based on differentials and expected continuation is perhaps the easiest. The order to which the series can be analysed depends quite a lot on numerical precision.

The computational complexity can be limited by using the local past to limit the size of the finite difference triangle, with the highest-order assumption being zero or a Monte Carlo spread Gaussian. Other predictions based on convolution and correlation could also be considered.

When using a local difference triangle, the outgoing sample making way for the new sample in the sliding window can be used to make a simple calculation about the error introduced by "forgetting" the information. This could be used in theory to control the window size, or the Monte Carlo variance. It is a measure related to the Markov model of a memory process, with the integration of high differentials multiple times giving more predictive deviation from that which will actually happen.
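
A minimal Java sketch of the sliding-window difference-triangle predictor, assuming scalar samples and that the highest-order difference continues unchanged (the zero assumption is the special case where that row is zero); the window size is illustrative.

import java.util.ArrayDeque;
import java.util.Deque;

// Hedged sketch: forward-difference triangle over a sliding window, predicting the next
// sample on the assumption that the highest-order difference continues unchanged.
public class DifferencePredictor {
    private final int window;
    private final Deque<Double> buf = new ArrayDeque<>();

    DifferencePredictor(int window) { this.window = window; }

    void push(double sample) {
        buf.addLast(sample);
        if (buf.size() > window) buf.removeFirst();   // the "forgotten" outgoing sample
    }

    double predictNext() {
        double[] row = buf.stream().mapToDouble(Double::doubleValue).toArray();
        double prediction = 0.0;
        // The extrapolated next value is the sum of the last entry of every difference row.
        while (row.length > 0) {
            prediction += row[row.length - 1];
            double[] next = new double[row.length - 1];
            for (int i = 0; i < next.length; i++) next[i] = row[i + 1] - row[i];
            row = next;
        }
        return prediction;
    }

    public static void main(String[] args) {
        DifferencePredictor p = new DifferencePredictor(4);
        for (double x : new double[]{1, 4, 9, 16}) p.push(x);   // x = n^2
        System.out.println(p.predictNext());                     // prints 25.0 for the quadratic
    }
}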

This is obvious when seen in this light. The time sequence has within it an origin from differential equations, although of extreme complexity. This is why spectral convolution correlation works well. Expensive compute, but it works well. Other methods have a lower compute requirement, and this is why I'm focusing on other methods these past few days.

A modified Gaussian density approach might be promising: assuming an amplitude categorization about a mean, the signal density (of the time series in a DSP sense) can approximate "expected" statistics when mapped from the Gaussian onto the historical amplitude density, given that the motions (differentials) have various rates of motion themselves in order to express a density.

The most probable direction holds until becoming over-probable changes the likely direction or rates again. Ideas form from noticing things. Integration, for example, has a naive accumulation of residual error from how floating point numbers are stored, and higher multiple integrals magnify this effect greatly. It would be better to construct an integral from the local data stream of a time series, and work out the required constant by adding a known integral at a fixed point.

Sacrificing integral precision for the non-accumulation of residual power error is a desirable trade-off in many time series problems. The inspiration for the integral estimator came from this understanding. The next step in DSP from my creative perspective is a Gaussian compander to normalize high-passed (or regression-subtracted normalized) data to match a variance- and mean-stabilized Gaussian amplitude.

Integration as a continued sum of Gaussians would, via the central limit theorem, go toward a narrower variance, but the offset error and same-sign square error (in double integrals, smaller but with no average cancellation) lead to things like energy amplification in numerical simulation of energy-conserving systems.

Today's signal processing piece was sparseLaplace, for quickly finding, for some sigma and time, the integral going toward infinity. I wonder how the series of the integrals goes as a summation of increasing sections of the same time step, and how this can be accelerated as a series approximation to the Laplace integral.

The main issue is that it is calculated from the localized data, good and bad. The accuracy depends on the estimates of the differentials, and so on the number of localized terms. It is a more dimensional "filter", as it has an extra set of variables for the centre and length of the window of samples as well as sigma. A few steps of time should be all that is required to get a series summation estimate. Even the error in the time-step approximation to the integral has a pattern, and may be used to make the estimate more accurate.
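
The internals of sparseLaplace are not given here, so the following is only a hedged Java sketch of one localized estimate in that spirit: repeated integration by parts gives the tail integral of f(tau) e^(-sigma tau) from t to infinity as e^(-sigma t) (f(t)/sigma + f'(t)/sigma^2 + f''(t)/sigma^3 + ...), with the derivatives taken from finite differences over the local window of samples.

// Hedged sketch (not the actual sparseLaplace): tail Laplace integral from a local window.
public class LaplaceTail {
    // samples hold f(t), f(t+h), f(t+2h), ...; derivatives come from forward differences.
    static double estimate(double[] samples, double h, double sigma, double t) {
        int order = samples.length;
        double[] diff = samples.clone();
        double sum = 0.0, sigmaPow = sigma;
        for (int k = 0; k < order; k++) {
            sum += diff[0] / sigmaPow;                  // term f^(k)(t) / sigma^(k+1)
            sigmaPow *= sigma;
            for (int i = 0; i < order - k - 1; i++)     // next forward-difference row
                diff[i] = (diff[i + 1] - diff[i]) / h;
        }
        return Math.exp(-sigma * t) * sum;
    }

    public static void main(String[] args) {
        double[] f = {1, 1, 1, 1};                          // f(tau) = 1: exact tail is e^(-sigma t)/sigma
        System.out.println(estimate(f, 0.1, 2.0, 0.5));     // series estimate
        System.out.println(Math.exp(-2.0 * 0.5) / 2.0);     // exact value for comparison
    }
}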

AI and HashMap Turing Machines

Considering that a remarkable abstract datatype or two is possible, and perhaps closely models the human sequential thought process, I wonder today what applications this will have when a suitable execution model, ISA and microarchitecture have been defined. The properties of controllable locality of storage and motion, along with read and write, branch on stimulus, and other yet-to-be-discovered machine operations, make for a container for a kind of universal Turing machine.

Today is a good day for robot consciousness, although I wonder just how applicable the implementation model is for biological life all the universe over. Here's a free paper on a condensed few months of abstract thought.

Computative Psychoanalysis

It's not just about IT; thrashing through what the mind does, can be made to do, and did, all leverages information and models simulation growth for matched or greater ability.

Yes, it could all be made in neural nets, but given the tools available why would you choose to stick with the complexity and lack of density of such a solution? A reasoning accelerator would be cool for my PC. How is this going to come about without much worktop workshop? If it were just the oil market I could affect, how did it come to pass that I was introduced to the fall of oil, and what other consequential thought sets, and hence productions, could I change?

One might call it wonder and design dressed in "accidental" reckless endangerment. What should be a simple, obvious benefit to the world becomes embroiled in competition with the drive for profit, for control of the "others" making of a non-happening which upsets vested interests.

Who’d have thought it from this little cul-de-sac of a planetary system. Not exactly galactic mainline. And the winner is not halting for a live mind.

A New Paper on Computation and Application

https://www.amazon.co.uk/Pipeline-Cache-Big-RISC-Computational-ebook/dp/B07XY9RSHH/ref=sr_1_1?keywords=pipeline+cache+big+risc&qid=1568807888&sr=8-1 is a nice paper on some computation issues, and eventually covers some politics and vitamin biochemistry. Not a fan? Still letting your biome let you shout at the bad people not feeding your hunger?

Shovel in the gammon all you want, and load it up with chips as a little survivor from ancient times takes advantage of the modern high carb diet and digs a hole for you.

Calculus

I don’t always get it wrong.

So it becomes a determined process to integrate. And as the two forms of integration closure are known, the process can be extended, as any integration has a closed form if the series converge. Integration by parts to a series. So why? The end points can have good integral estimates, and many in-between values of the function do not need evaluation. Series acceleration should be enough. Imagine an integral from zero to m^a times n^b which equals m times n. If for some a not equal to b, does the factor of m or n become obvious? The calculation would be logarithmic in the upper limit, in polytime, not linear.
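
A hedged sketch of "integration by parts to a series" in its standard repeated form, differentiating f and anti-differentiating g each time:

\int f\,g\,dx = f\,G_1 - f'\,G_2 + f''\,G_3 - \cdots = \sum_{k \ge 0} (-1)^k f^{(k)} G_{k+1}, \qquad G_1 = \int g\,dx, \quad G_{k+1} = \int G_k\,dx

The series terminates when f is a polynomial; otherwise the convergence condition mentioned above is what has to hold.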

The previous page was:

Think about f+c as the integral of f plus a rectangle, making f always positive when offset by c, to give a defined sign and hence a binary search opportunity.

It wasn't specifically developed to crack public key things; the motivation was simplified solutions to differential equations. Anyone who's done DE solving knows the problem with them. That problem is integration, and closing it to be algorithmic is a useful thing. That kind of leaves the Lambert W style collection-of-variables problem for real analytical DEs. Good.

It also sets a complexity limit on integration in terms of an analytic function and a series of differential orders. The advice to try a power series multiplied by ln x is seen as good, but lacking. Hypergeometric series can be seen again as useful for approaching the series of this closure. It may be helpful to decompose these closures into more fundamental sums of new special operators, and do some cancellation. If you find yourself pedantic about dx or plus C, then might I suggest you forget it and blunder on.

Ideas in AI

It's been a few weeks, and I've been writing a document on AI and AGI which is currently internal and selectively distributed. There is definitely a lot to try out, including new network arrangements or layer types, and a fundamental insight from the Category Space Theorem and how it relates to training sets for categorization or classification AIs.

Basically, the category space is normally created to have only one network loss function option to minimise on backpropagation. It can be engineered so this is not true, and training data does not compete so much in a zero-sum game between categories. There is also some information context for an optimal order in categorization when using non-exact storage structures.

Book Published in Electronic Format. Advanced Content not Beginner Level. Second Edition may Need a Glossary.

The book is now live at £3 on Amazon in Kindle format.

It’s a small book, with some bad typesetting, but getting information out is more important for a first edition. Feedback and sales are the best way for me to decide if and what to put in a second edition. It may be low on mathematical equations but does need an in-depth understanding of neural networks, and some computer science.

AI as a Service

The product development starts soon, from the initials done over the last few weeks. An AI which has the aim of being more performant per unit cost. This is to be done by adding in “special functional units” optimized for effects that are better done by these instead of a pure neural network.

So apart from mildly funny AaaS selling jokes, this is a serious project initiative. The initial tests when available will compare the resources used to achieve a level of functional equivalence. In this regard, I am not expecting superlative leaps forward, although this would be nice, but gains in the general trend to AI for specific tasks to start.

By extending the already available sources (quite a few) with flexible licences, the aim is the building of easy-to-use AI with some modifications, perhaps extensions to open standards such as ONNX, and on to maybe VHDL FPGA and maybe ASIC.

Simon Jackson, Director.

Pat. Pending: GB1905300.8, GB1905339.6

Today’s Thought


import 'dart:math';

class PseudoRandom {
  int a;
  int c;
  int m = 1 << 32;
  int s;
  int i;

  PseudoRandom([int prod = 1664525, int add = 1013904223]) {
    a = prod;
    c = add;
    s = Random().nextInt(m) * 2 + 1;//odd
    next();// a fast round
    i = a.modInverse(m);//4276115653 as inverse of 1664525
  }

  int next() {
    return s = (a * s + c) % m;
  }

  int prev() {
    return s = (s - c) * i % m;
  }
}

class RingNick {
  List<double> walls = [ 0.25, 0.5, 0.75 ];
  int position = 0;
  int mostEscaped = 1;//the lowest pair of walls 0.25 and 0.5
  int leastEscaped = 2;//the highest walls 0.5 and 0.75
  int theThird = 0;//the 0.75 and 0.25 walls
  bool right = true;
  PseudoRandom pr = PseudoRandom();

  int _getPosition() => position;

  int _asMod(int pos) {
    return pos % walls.length;
  }

  void _setPosition(int pos) {
    position = _asMod(pos);
  }

  void _next() {
    int direction = right ? 0 : walls.length - 1;//truncate to 2
    double wall = walls[_asMod(_getPosition() + direction)];
    if(pr.next() > (wall * pr.m).toInt()) {
      //jumped
      _setPosition(position + (right ? 1 : walls.length - 1));
    } else {
      //not jumped
      right = !right;//bounce
    }
  }

  void _prev() {
    int direction = !right ? 0 : walls.length - 1;//truncate to 2
    double wall = walls[_asMod(_getPosition() + direction)];
    if(pr.s > (wall * pr.m).toInt()) {// the jump over before sync
      //jumped
      _setPosition(position + (!right ? 1 : walls.length - 1));
    } else {
      //not jumped
      right = !right;//bounce -- double bounce and scale before sync
    }
    pr.prev();//exact inverse
  }

  void next() {
    _next();
    while(_getPosition() == mostEscaped) _next();
  }

  void prev() {
    _prev();
    while(_getPosition() == mostEscaped) _prev();
  }
}

class GroupHandler {
  List<RingNick> rn;

  GroupHandler(int size) {
    if(size % 2 == 0) size++;
    rn = List<RingNick>.generate(size, (_) => RingNick());//populate with instances rather than nulls
  }

  void next() {
    for(RingNick r in rn) r.next();
  }

  void prev() {
    for(RingNick r in rn.reversed) r.prev();
  }

  bool majority() {
    int count = 0;
    for(RingNick r in rn) if(r._getPosition() == r.leastEscaped) count++;//a main cumulative
    return (2 * count > rn.length);// a strict majority of the ring nicks
  }

  void modulate() {
    for(RingNick r in rn) if(r._getPosition() == r.leastEscaped) {
      r._setPosition(r.theThird);
    } else {
      //mostEscaped eliminated by not being used
      r._setPosition(r.leastEscaped);
    }
  }
}

class Modulator {
  GroupHandler gh = GroupHandler(55);

  int putBit(bool bitToAbsorb) {//returns absorption status
    gh.next();
    if(gh.majority()) {//main zero state
      if(bitToAbsorb) {
        gh.modulate();
        return 0;//a zero yet to absorb
      } else {
        return 1;//absorbed zero
      }
    } else {
      return -1;//no absorption emitted 1
    }
  }

  int getBit(bool bitLastEmitted) {
    if(gh.majority()) {//zero
      gh.prev();
      return 1;//last bit not needed emit zero
    } else {
      if(bitLastEmitted) {
        gh.prev();
        return -1;//last bit needed and nothing to emit
      } else {
        gh.modulate();
        gh.prev();
        return 0;//last bit needed, emit 1
      }
    }
  }
}

class StackHandler {
  List<bool> data = [];
  Modulator m = Modulator();

  int putBits() {
    int count = 0;
    while(data.length > 0) {
      bool v = data.removeLast();
      switch(m.putBit(v)) {
        case -1:
          data.add(v);
          data.add(true);
          break;
        case 0:
          data.add(false);
          break;
        case 1:
          break;//absorbed zero
        default: break;
      }
      count++;
    }
    return count;
  }

  void getBits(int count) {
    while(count > 0) {
      bool v;
      v = (data.length == 0 ? false : data.removeLast());//zeros out
      switch(m.getBit(v)) {
        case 1:
          data.add(v);//not needed
          data.add(false);//emitted zero
          break;
        case 0:
          data.add(true);//emitted 1 used zero
          break;
        case -1:
          break;//bad skip, ...
        default: break;
      }
      count--;
    }
  }
}

Statistics and Damn Lies

I was wondering over the statistics problem I call the ABC problem. Say you have 3 walls in a circular path, of different heights, and between them are points marked A, B and C. In any 'turn' the 'climber' attempts to scale the wall in the current clockwise or anti-clockwise direction. The chance of failing is proportional to the wall height, so the higher walls are harder to scale. If the climber fails to get over a wall, they reverse direction. A simple thing, but what are the chances the climber will be found facing clockwise just before scaling, or failing to scale, a wall? Is it close to 0.5, given the problem is not symmetric?

More interestingly, the climber will in a very real sense be captured more often in the cell with the highest pair of walls. If the cell with the lowest pair of walls is considered just a consumption of time, then what is the ratio of the containment time to the total time not spent in that least inescapable cell?
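
A minimal Monte Carlo sketch in Java of the ABC problem as stated, assuming a jump fails with probability equal to the wall height (so higher walls are harder) and using the same 0.25/0.5/0.75 walls as the code above; it just estimates the two ratios asked about.

import java.util.Random;

// Hedged Monte Carlo sketch of the ABC problem: three walls of height 0.25, 0.5 and 0.75 on a ring,
// a jump fails with probability equal to the wall height, and a failed jump reverses direction.
public class AbcWalk {
    public static void main(String[] args) {
        double[] walls = {0.25, 0.5, 0.75};     // wall i sits between cell i and cell i+1 (mod 3)
        int cell = 0;
        boolean clockwise = true;
        long[] cellTime = new long[3];
        long clockwiseCount = 0, steps = 10_000_000;
        Random rnd = new Random(42);
        for (long s = 0; s < steps; s++) {
            if (clockwise) clockwiseCount++;
            cellTime[cell]++;
            int wall = clockwise ? cell : (cell + 2) % 3;              // wall ahead in the current direction
            if (rnd.nextDouble() > walls[wall]) {
                cell = clockwise ? (cell + 1) % 3 : (cell + 2) % 3;    // jumped over
            } else {
                clockwise = !clockwise;                                 // bounced
            }
        }
        System.out.println("P(facing clockwise) ~ " + (double) clockwiseCount / steps);
        System.out.println("time per cell ~ " + cellTime[0] + " " + cellTime[1] + " " + cellTime[2]);
    }
}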

So for the binomial distribution of the elimination of the 'emptiest' cell, when repeating this pattern as an array with co-prime 'dice' (if all occupancy has to be in one of the two most secure cells in each 'ring nick'), the rate depends on the number of ring nicks. The considered security majority state is the state (selected from the two most secure cell states) which more of the ring nicks are in, given none are in the least secure of the three states.

For the ring nick array to be majority most-secure more than two thirds of the time is another binomial or two away. If, more than two-thirds of the time (excluding gaping minimal-occupancy cells), the most secure state is in the majority, and less than two-thirds of the time (by unitary summation) the middle-security cells are in the majority, there exists a Jaxon Modulation coding to place data on the prisoners by reversing all their directions at once where necessary, inverting the majority into a rarer minority state with more Shannon information. Note that the pseudo-random dice and other quantifying information remain constant in bits.

Dedicated to Kurt Gödel … I am number 6. 😀

Kindle Fire (Pt. III)

A general complaint about Android devices is that when you're low on power, they always want to switch on and waste it rather than wait until you press the on button. It's part of the global always-on spy network, designed for idiots with money and not for intelligent or off-grid people. Alexa likely wants to know your inside leg measurement. As I said, this is general to all Android devices, so I suppose expecting more from Amazon was just too much.

I suppose it would be too much to edit things like the above equation on the device, but I will try to see if there is such an equation editing tool. Plenty of good calculators, but few typographical tools. I sometimes would like to do this. It’s not as though I need the mathematical assistance, more typographical layout, for including in documents.

It seems there is nothing which will do this offline. Maybe an app opportunity? Likely a long development. It depends on other tools such as MathML being hack-able into something else. Of course n=k in the above equation. A bit of maths in the “analytic closure of integration” to make it a deterministic process for a CAS (Computer Algebra System). It replaces integration (hard for computers to pattern match, and based on a large and incomplete knowledge base) with simultaneous equations and factorization.

There seem to be some downloaded episodes of some series that happened this morning. Three free episodes (Number 1) of some random TV shows. I assume this is to get people into watching exciting stuff. I feel a bandwidth suck in the making. Ah, so it's called "On Deck", and although kind of interesting, it would be nice to make it only use certain WiFi networks. While on a 4G hotspot proxy, it will make my bank account sad.

GEM Unification

The further result of adding Coulomb force gradients into the theory of Uncertain Geometry: the GEM (Geometry/Gravity and Electro-Magnetism) Unification hints at the above table of particles. A mass genus of "Dark" matter (B), and some strange matter (A). The paper so far can be downloaded from Google Drive. I'm currently on the search for a suitable equation relating to the Weak force. I have no proof yet that it would be emergent, but the particle grid already includes a "dark matter" column (including a dark neutrino (yellow)), and a "not so dark" but very strange and heavy type A particle set.

MaxBLEP Audio DSP

TYPE void DEF blep(int port, float value, bool limit) SUB
	//limit line level
	if(limit) value = clip(value);
	//blep fractal process residual buffer and blep summation buffer
	float v = value;
	value = blb[port] - value - bl[((idx) & 15) + 32 * port + 16];//and + residual
	blb[port] = v;//for next delta
	for(int i = 0; i < 15; i++) {
		bl[((i + idx + 1) & 15) + 32 * port] += value * blepFront[i];
	}
	value += bl[((idx) & 15) + 32 * port];//blep
	float r = value - (float)((int16_t)(value * MAXINT)) / (float)MAXINT;//under bits residual
	bl[((idx) & 15) + 32 * port + 16] = value * (blepFront[15] - 1.0);//residual buffer
	bl[((idx + 1) & 15) + 32 * port] += r;//noise shape
	idx++;
	//hard out
	_OUT(port, value - r);//start the blep
RETURN

Yes, an infinite zero-crossing BLEP. … Finance and the BLEP-reduced noise of micro transactions.

Block Tree Topological Proof of Work

Given that a blockchain has a limited entry rate on the chain due to the block uniqueness constraint, a more logical mass blocking system would use a tree graph, to place many leaf blocks on the tree at once. This can be done by folding the leading edge of the tree onto random previous blocks to achieve a number of virtual pointer rings, setting a joined pair of blocks as a new node in an Euler number mapping, with a competition on genus and closure of the tree head leaf list to match block use demand.

The coin, as it were, is the genus topology, with weighted construction ownership of node value. The data decides part of the selection of the tree leaf node loop-back pointers; the randomness allows a spread of topological properties in the proof-of-work space.

A Modified ElGamal for Passwords Only

It occurred to me that g does not need to be made public for ElGamal signing if the value g^H(m) is stored as the password hash, generated by the client. Also, (r, s) can be changed to (r, r^s) to reduce the server verification load to one mod power, one precision multiply mod p, and a subtraction equality test. So on the creation of a new password (y, p, g^H(m)) is created, and each login needs the client to generate a k value to make (r, r^s).

Password recovery would be a little complex, and involve some email backdoor based on maybe using x as a pseudo H(m), and verifying the changes via generation of y. This would of course only set the local browser to have a new password. So maybe a unique (y, p, g^H(m)) per browser local store would be used. Index the local storage via email address, and Bob's yer been here before.

Also, the server can crypt any pending view using H(m) as a person's private key, or the private key as a browser-specific personal private key, or maybe even a browser key with all clients using the same local store x value. All using DH shared secrets. This keeps data in a database a bit more private, and sometimes encrypt-to-self might be useful.

Is s = H(m)(1-r)(k^-1) mod (p-1) an option? This sets H(m) = x, eliminating another y and making (p, g^H(m)) sufficient for authentication server storage; g is only needed if the server needs to send crypts. Along with r = g^k mod p, as some easy sign. (r, s) might have to be used, as r^s could be equated with modinverse(r) for an easy g^H(m) equality, and the requirement to calculate s from r^s is a challenge. So a secure version is not quite as server efficient.

In reality k also has to be computed to prevent (r, s) reuse. This requires that the k choice is the server's. Sending k in plaintext defeats the security, so g is needed to calculate g^z, and so g^(H(m))^z = k on both sides. With a retry randomizer to hide s = 0, a protocol is possible.

This surpasses a server MD5 of the password. If the MD5 is client side, a server capture can log in. If the MD5 is server side, the transit intercept is … but a server DB compromise also needs a web server compromise. This algorithm also needs a client-side compromise, or an email intercept as per.

The reuse of (r, s) can't be prevented without knowing k, and hence H(m); therefore a shared secret as a returned value implies H(m) knowledge. So one mod power client side, and two server side.

g^k to client.
(g^k)^H(m) to server.
(g^H(m))^k = (g^k)^H(m) tests true.
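
A hedged Java sketch of just that three-line exchange with BigInteger, assuming p, g and the stored g^H(m) are already agreed out of band; hm here is a stand-in for the client-side password hash H(m), and k is the server's per-login exponent.

import java.math.BigInteger;
import java.security.SecureRandom;

// Hedged sketch of the three-line challenge/response above; parameters are toy-sized.
public class PasswordCheckSketch {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(512, rnd);   // toy prime; real use wants a safe prime
        BigInteger g = BigInteger.valueOf(2);
        BigInteger hm = new BigInteger(256, rnd);            // stand-in for H(m), the client password hash

        BigInteger stored = g.modPow(hm, p);                 // server stores g^H(m) at registration

        BigInteger k = new BigInteger(256, rnd);             // server's per-login exponent
        BigInteger challenge = g.modPow(k, p);               // g^k to client
        BigInteger response = challenge.modPow(hm, p);       // (g^k)^H(m) back to server
        boolean ok = stored.modPow(k, p).equals(response);   // (g^H(m))^k = (g^k)^H(m) tests true
        System.out.println("authenticated: " + ok);
    }
}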

Signatures are useless as challenge responses. The RSA version would have to involve a signature on H(m) and so need H(m) directly. Also, the function H can be quite interesting to study. The application of client-side salt is also not needed on the server side as a decode key, and so is not decoded there. DH is so cool like that. And (p-1) having a large factor is easy to arrange in the key generation. And write access to data is harder, most of the time, to obtain.

Storing a crypt along with the g^k used locks it for H(m)-keyed access. This could void data on a password reset, or a browser local storage reset, but does prevent some client data leak opportunities, such as DB decrypt keys. This would mean multiple crypts of the symmetric key for shared data, but would this significantly reduce the shared key security? It would prevent new users accessing the said secured data without cracking the shared key. A locked share for private threads, say?

Spamming your friends with g^salt and g^salt^H(m)?

The first one is a good idea, the second not so much. AI spam encodes g^salt to your and your friends' accounts. The critical thing is the friend doesn't get the password. Assume a bad friend, who registers and gets g^salt to activate, from their own chosen spoof password. An email does get sent to your address, to cancel the friend as an option, and no other problem exists except login to a primary mail account, as a spoof maybe would see the option to remove you from your own account.

The primary control email account would then need secondary authentication, such as only seeing the spam folder, and knowing what to open first and in which order. For password recovery, this would be OK. For initial registration, it would be first come, first served anyhow.

Sallen-Key ZDF design

As part of the VST I am producing, I have designed an SK filter analogue where the loading of the first stage by the second is removed to ease implementation. This only affects the filter Q, which then has an easy translation of the poles to compensate. Implementing it as a CR filter simulation reduces the basic calculation. This is then expanded on by a zero-delay design, to better its performance.

ZDF filters rely on making a better integral estimate of the voltage over the sample interval, to better calculate the linear current-charge delta voltage: more of a trapezoid integration than a sum of rectangles. There are still some non-linear charge effects, as the voltage affects the current. The current output sample is now not known, so a collection of terms is needed to solve for it. Given a high enough sample rate, the error of linearity is small, smaller than without it, and the phase response is flat due to the error being symmetric on the simulated capacitor voltage and drive, and not just the capacitor voltage.

The mapping of frequency to the correct resistive constant is a good match, and any further error is equivalent to a high-frequency gain reduction. There is a maximum frequency of stability introduced in some filters, but this is not one of those: stability increases with ZDF. The double-pole iteration is best done by considering x+dx terms and shifting the dx calculation till later. Almost-the-output of pole 1 is used to calculate most of the output of pole 2 multiplied by a factor, added on to the pole 1 result, and the pole 2 result is then finally divided. These dx are then added to make the final outputs to memorize.
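
Not the full Sallen-Key structure described above, but a hedged Java sketch of the core zero-delay one-pole building block it cascades, using trapezoidal integration and solving the implicit output in one step:

// Hedged sketch of a zero-delay-feedback (trapezoidal) one-pole lowpass.
public class ZdfOnePole {
    private double s = 0.0;    // integrator state (simulated capacitor voltage memory)
    private final double g;    // pre-warped frequency coefficient

    ZdfOnePole(double cutoffHz, double sampleRateHz) {
        g = Math.tan(Math.PI * cutoffHz / sampleRateHz);    // bilinear pre-warp of the CR constant
    }

    double process(double x) {
        double v = (x - s) * g / (1.0 + g);   // solve the implicit (zero-delay) output in one step
        double y = v + s;
        s = y + v;                            // update the state for the next sample
        return y;
    }

    public static void main(String[] args) {
        ZdfOnePole lp = new ZdfOnePole(1000.0, 48000.0);
        for (int n = 0; n < 5; n++) System.out.println(lp.process(1.0));   // step response
    }
}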

Cryptography





pub 4096R/8E2EAD58 2017-07-30 Simon Jackson

MIT key server

I've been looking into cryptography today and have developed a quartet filter of Java classes which do Diffie-Hellman 2048 AES key transfer with AES encryption, and ElGamal signing. I chose not to use the shared secret method which uses both private keys, but went for a single secret symmetric AES version, with no back communication.

The main issues were with the signature fail stream close handling, to avoid data corruption via pre-verified data being read as active. An interesting challenge it has been. Other ciphers may have been more logical to some people, and doing the DH modPow by explicit coding was good for the code soul.

I think the discrete logarithm problem is quite secure, and has the square order of 1 prime versus the RSA 2 primes for the same key length. The elliptic curve methods are supposedly harder, but the key topology has perhaps some backdoors deep in some later maths. The AES 128 has the lowest key complexity, and is the weakness in the scheme as written, so an interleave was made.

Java does make it difficult to build a standard enhanced symmetric cipher to fix this key shortfall. Not impossible, but difficult. I may add an intermediate permutation filter to expand the symmetric key length. In the end, I decided on a split symmetric 256-bit key for AES, one half for an outer ECB, and one for an inner CBC. The 16-byte IV was used as a step offset between them for an effective 256-bit key.

The DH 2048 does not do key exchange with a common p or g, as this is what leads to the x-collision-over-the-same-p problem. The original plan of a public key with less exchange of ephemeral keys is better. The time-to-solve complexity is similar to a similar-sized RSA. EC cryptography is cool, but still a little not understood, which is ironic for a mathematical field, with a little too much "under" information on how maybe to "find" holes.

The whole concept of perfect forward security moves the game on to AES cracks based on initial stream content estimates. I'd suggest most of the original key exchange space is pre-computed for a simple 128-bit symmetric crack by now. Out of all the built-in Java key types, DH is from my point of view the best for public key cryptography. RSA is cool too for sure, but division is a "relatively" simple operation. There is an estimated 20-bit advantage in the discrete log problem.

DH keys can be decoded to do ElGamal and basic public-key secret generation. I'm not sure if DSA as an alternative just needs an extra factor, but a Pollard rho trigger of a future co-p, q effect might be possible. P and Q in DSA are not independent; one is almost a multiple of the other …

Welcome to the national insecurity bank robbery. I know, the state via an affiliated plc, stole 1/4 of my income last year by getting me to destroy evidence.

The artificial limits on the key length, and the problems leading from that, are in the JDK source. Also the deletion of keys from memory pages when freed back to the OS may be a problem. Quite a nice programming challenge to do. The Java libs have some strange restrictions on g. View the source.

/* Diffie-Hellman Cipher AES. (C)2017 K Ring Technologies Ltd.
 A DH symmetric secret (1024 bit) for a 2* AES 128 (256 bit) interleave.
 The 16 byte offset interleave of the ECB is used for the IV slot
 of the CBC.
 */
package uk.co.kring.net;

import java.io.FilterInputStream;
import java.io.FilterOutputStream;
import java.math.BigInteger;
import java.math.SecureBigInteger;
import java.security.KeyPair;
import java.security.PublicKey;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.interfaces.DHPrivateKey;
import javax.crypto.interfaces.DHPublicKey;
import javax.crypto.spec.IvParameterSpec;

/**
 *
 * @author Simon
 */
public final class DHCipher {
    
    public static final class InputStream extends FilterInputStream {

        public InputStream(java.io.InputStream in, KeyPair pub) throws Exception {
            super(in);
            BigInteger p, sk;
            SecureBigInteger x;
            int bytes;
            byte[] bb;
            DHPublicKey k = (DHPublicKey)pub.getPublic();
            p = k.getParams().getP();
            bytes = (p.bitLength() + 7) / 8;
            DHPrivateKey m =(DHPrivateKey)pub.getPrivate();
            x = new SecureBigInteger(m.getX());
            bb = new byte[bytes];
            in.read(bb);
            sk = new BigInteger(bb);
            if(!sk.abs().equals(sk)) {
                sk = new BigInteger(asLen(bb, bb.length + 1));
            }
            sk = sk.modPow(x, p);
            x.destroy();
            Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
            c.init(Cipher.DECRYPT_MODE, Keys.getAES(sk)[0]);
            in = new CipherInputStream(in, c);
            bb = new byte[24];
            in.read(bb);
            IvParameterSpec iv = new IvParameterSpec(Arrays.copyOfRange(bb, 8, 24));
            c = Cipher.getInstance("AES/CBC/PKCS5Padding");
            c.init(Cipher.DECRYPT_MODE, Keys.getAES(sk)[1], iv);
            in = new CipherInputStream(in, c);
            int i = in.read() % 23;
            in.skip(i);
        }
    }
    
    public static byte[] asLen(byte[] b, int len) {
        byte[] q = new byte[len];
        int j;
        for(int i = len - 1; i >= 0; i--) {
            j = i + b.length - q.length;
            if(j < 0) break;
            q[i] = b[j];
        }
        return q;
    }
    
    public static final class OutputStream extends FilterOutputStream {
        
        public OutputStream(java.io.OutputStream out, PublicKey pub) throws Exception {
            super(out);
            BigInteger y, g, p, sk;
            int bytes;
            byte[] bb;
            DHPublicKey k = (DHPublicKey)pub;
            y = k.getY();
            g = k.getParams().getG();
            p = k.getParams().getP();
            bytes = (p.bitLength() + 7) / 8;
            bb = new byte[bytes];
            Keys.getR().nextBytes(bb);
            BigInteger b = new BigInteger(bb);
            b = b.abs();
            bb = g.modPow(b, p).toByteArray();
            bb = asLen(bb, bytes);
            out.write(bb);//ephermeric
            sk = y.modPow(b, p);
            Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
            c.init(Cipher.ENCRYPT_MODE, Keys.getAES(sk)[0]);
            out = new CipherOutputStream(out, c);
            bb = new byte[24];
            Keys.getR().nextBytes(bb);
            out.write(bb);
            IvParameterSpec iv = new IvParameterSpec(Arrays.copyOfRange(bb, 8, 24));
            c = Cipher.getInstance("AES/CBC/PKCS5Padding");
            c.init(Cipher.ENCRYPT_MODE, Keys.getAES(sk)[1], iv);
            out = new CipherOutputStream(out, c);
            Keys.getR().nextBytes(bb);
            out.write((byte)(bb[0] % 23 + 23 * bb[23]));
            out.write(bb, 1, bb[0] % 23);
        }
    }
}

And the following code for clearing the key. Perfect forward security requires the same q to be used and an extra negotiation step. As it stands it's not perfect, but as good as RSA, maybe slightly better.

/* Useful. (C)2017 K Ring Technologies Ltd.
 */
package java.math;

import java.util.Arrays;
import java.util.Vector;
import javax.security.auth.DestroyFailedException;
import javax.security.auth.Destroyable;
import uk.co.kring.net.Keys;

/**
 *
 * @author Simon
 */
public final class SecureBigInteger extends BigInteger implements Destroyable {
    
    private boolean d = false;
    private BigInteger ref;
    private static final Vector<SecureBigInteger> m = new Vector<SecureBigInteger>();
    
    private synchronized void handler(BigInteger val) {
        ref = val;
        m.add(this);
        System.arraycopy(val.mag, 0, mag, 0, mag.length);
    }
    
    public SecureBigInteger(BigInteger val) throws Exception {
        super(val.bitLength(), 1, Keys.getR());
        handler(val);
    }

    @Override
    public boolean isDestroyed() {
        return d;
    }

    @Override
    public void destroy() throws DestroyFailedException {
        Arrays.fill(mag, -1);
        m.remove(this);
        boolean in = false;
        for(SecureBigInteger x: m) {//iterate the Vector directly; casting an Iterator to Iterable throws at runtime
            if(x.ref == ref) in = true;
        }
        if(!in) Arrays.fill(ref.mag, -1);//clear final instance
        d = true;
    }
    
    public void masterDestroy() throws DestroyFailedException {
        for(SecureBigInteger x: new Vector<SecureBigInteger>(m)) {//copy first, as destroy() removes from m
            x.destroy();
        }
    }
}

There is also the possibility of G exchange, which would allow for calculation of new Y. This would have advantages of instancing a public key set, based on a 1 to 1 crypt role. The cracking of any public key thus only cracks one link and not the full set of peers to a node. In reality, g just alters y, and p does change the crypt. So an exchange of new y is required. There is a potential flaw in this swap if the new p and g are chosen in a cracked domain.

Allowing the client to select g in the server-selected p domain is a minor concession to duplication; the server would have to return a new y. Such a thing might invite a DOS attack, and so should be restricted somewhat. If g is high in repeated factors, then the private key is effectively multiplied up and reduced mod p-1, and g is reduced to a lower base.

Dissection of the Roots of the Mass Independent Space Equation

(v^2) v''':              3 constants, square power, 3 root pairs; energy and force of force; potential inertial term; Gravity.
−9 v v' v'':             2 constants, linear power, 2 roots; momentum, force and velocity of force; kinetic inertial term; Dark.
12 (v')^3:               1 constant, cubic power, 1 root and 1 root pair; cube of force; strong term; Strong.
(1 − v^2/c^2) v' (wv)^2: 1 constant, square and quartic power, 1 root pair and 2 root pairs; force energy; relativistic force energy coupling; Weak EM.

The fact there are 4 connected modes, as it were, implies there are 6 crossovers between modes of action, indicating that one term can be stimulated to convert into another term. The exact equilibrium points can be set as 6 differential equation forms, with some further analysis required of stable phase space bounds, and of unstable phases at which to alter the balance to have a particular effect. Installing a constant (or function) of proportionality in each of the following balance equations would in effect allow some translation of one term's 'resonance' into another.

v v''' = −9 v' v''                     (3 constants and 1 root point)
(v^2) v''' = 12 (v')^3                 (3 constants and 6 root points)
v''' = (1 − v^2/c^2) v' w^2            (3 constants and 2 root points)
−9 v v'' = 12 (v')^2                   (2 constants and 2 root points)
−9 v'' = (1 − v^2/c^2) w^2 v           (2 constants and 2 root points)
12 (v')^2 = (1 − v^2/c^2) (wv)^2       (1 constant and 12 root points)

Another interesting point is that 3 of the 6 are independent of w (the omega mass oscillation frequency), and also by implication of the relativistic dependence on c.




The 3D Flavour Tensor in Analogue to the 4D of Einstein, for a 3D, 4D Curvature in Particle Physics

I like to keep updated about particle physics and LHC things, to quite an advanced level. My interest is in fields and their previous engineering value in radio waves and electronics in general. It makes sense to move to a tensor algebra in the 2+1 charge space, just as was done for the theory of gravitation. In some sense the conservation of acceleration becomes a conservation of net mapped curvature and it becomes funny via Noether’s Theorem.

CP violation as a horizon delta of radius of curvature from the "t" distance is perhaps relevant phrased as a moment of inertia in the 2+1, and its resultant geometric singular forms. This does create the idea of singular forms in the 2+1 space orbiting (or perhaps more correctly resonating) in tune with singularities in the 3+1 space. This interconnection entanglement, or something similar, is perhaps connected to the "weak phase".

So a 7D total space-time, with differing invariants in the 3D and 4D parts. The interesting thing from my perspective is the prediction of a heavy graviton, and conservation of acceleration. The idea that space itself holds its own shape without graviton interaction, and so conserves acceleration, while the heavy graviton can be a short-range force which changes the curvature. The graviton then becomes a mediator of jerk and not acceleration. The graviton, being heavy, would also travel slower than light. Gravity waves would then not necessarily need graviton exchange.

Quantization of theories has I think in many ways gone too far. I think the big breaks of the 21st century will be turning quantized bulk statistics into unquantized statistics, with quantization applied to only some aspects of theories. The implication is that dark matter is bent spacetime, without matter being present to emit gravitons. In this sense I predict it is not particulate.

So 7D and a differential phase space coordinate for each D (except time) gives a 13D reality. The following is an interesting equation I arrived at at one point for velocity solutions to uncertainty. I did not incorporate electromagnetism, but it's interesting in the number of solutions, or superposition of velocity states as it were. The w is constant in the assumption, but a perturbative expansion in it may be interesting. The units of the equation are conveniently force. A particle observing another particle would also be moving as such, and the non-linear summation for the lab rest frame of explanation might be quite interesting.

(v^2) v''' − 9v v' v'' + 12(v')^3 + (1 − v^2/c^2) v' (wv)^2 = 0

With ' representing differentiation with respect to time. So v' is acceleration and v'' is the jerk; I think v''' is called the jounce, for those with a mind to learn all the Js. An interesting equation, considering the whole concept of uncertain geometry started from an observation that relative mass was kind of an invariant: mass oscillation, although weird with RMS mass and RMS energy conservation, was perhaps a good way of parameterizing an uncertainty "force" proportional to the kinetic energy momentum product. As an addition it was more commutative as a tensor algebra. Some other work I calculated suggests dark energy is conservation of mass times the log of normalized velocity, and dark matter could be conserved acceleration, with gravity and the graviton operating not to bend space on density, but to bend space through a short-distance-acting heavy graviton. Changes in gravity could thus travel slower than light, and an integral with a partial fourth-power fraction could expand into conserved acceleration, energy, momentum and mass information velocity (dark energy), with perhaps another form of Higgs, and an uncertainty boson (spin 1) as well.

So really a 13D geometry. Each velocity state in the above mass-independent free space equation is an indication of a particle of differing mass: a particle count based on solutions, 6 quarks and all. An actual explanation for the three flavours of matter? Assuming an approximately linear superposable solution with 3 constants of integration, this gives 6 parameterized solutions from the first term via 3 constants and the square being rooted. The second term involves just 2 of the constants for 2 possible offsets, and the third term involves just one of the constants, but 3 roots with two being in complex conjugation. The final term involves just one of the constants, but an approximation to the fourth power for 4 roots, disappearing when the velocity is the speed of light, and so is likely a rest mass term.

So that would likely be a fermion list. A boson list would be in the boundaries at the discontinuities between those solutions, with the effective mass of the boson controlled by the expected lifetime between the states, and the state energy mismatch. Also of importance is how the equation translates to 4D, 3D spacetime, and the normalized rotational invariants of EM and other things. Angular momentum is conserved and constant (dimensionless in uncertain geometry).

Assuming the first 3 terms are very small compared to the last term, and v is not the speed of light, there would have to be some imaginary component to velocity, and this imaginary part would be one of the degrees of freedom (leading to a total of 26). Is this imaginary velocity consistent with isospin?

Yang–Mills Existence and Mass Gap (Clay Problem)

If mass oscillation is proved to exist, then the mass gap can never be proved to be greater than zero as the mass must pass through zero for oscillation. This does exclude the possibility of complex mass oscillation, but this is just mass shrinkage (no eventual gap in the infinite time limit), or mass growth, and hence no minimum except in the big bang.

The 24 degrees of freedom on the relativistic compacted holographic 3D for the 26D string model imply, with elliptic functions, a 44 fold way. This is a decomposition into 26 sporadic elliptic patterns and 18 generational spectra patterns. With the differential equation above providing 6*2*(2+1) combinations from the first three terms, and the 3 constants of integration locating in “colour space” through a different orthogonal basis, this would provide 24 apparent solution types, with 12 of them having a complex conjugation relation as a pair, for 36. If this is the isospin solution, then the 12 fermionic solutions have all been found. That leaves the 12 bosonic solutions (the ones without a conjugate in the 3rd term generative), with only 5 (or, if the photon is special, 4) having been found so far. If the bosonic sector includes the dual rooting via the second term for spin polarity, then of the six (with the dual degenerates cancelled), two more are left to be found if light is special in the 4th term.

This would also leave 8 of the 44 fold way in a non-existent capacity. I’d maybe focus on them being gluons, and consider the third still to be found as a second form of Higgs. OK.

Displacement Currents in Colour Space

Maybe an interesting wave induction effect is possible. I’m not sure what the transmitter should be made of. The ABC modulation may make it a bit “alternate” near the field emission. So not caused by bosons in the regular sense, more the “transition bosons” between particle states. The specific transitions between energy states may (although it’s not certain) pull the local ABC field in a resonant or engineered direction. The actual ABC solution of this reality has to have some reasoning for being stable for long enough. This does not imply, though, that no other ABC solutions act in parallel, or are not obtainable via some engineering means.

LZW (Perhaps with Dictionary Acceleration) Dictionaries in O(m) Memory

Referring to a previous hybrid BWT/LZW compression method I have devised, the dictionary of the LZW can be stored in chain-linked fixed size structure arrays, one character (the symbol end) back-linking to the first character through a chain. This makes for efficient symbol indexing based on number, and with the slight addition of two extra pointers, a set of B-trees can be built, separated by symbol length, to also be loaded inside parallel arrays for fast incremental finding of the existence of a symbol. A 16 bucket move-to-front hash table could also be used instead of a B-tree, depending on the trade off between the memory of a 2 pointer B-tree, or a 1 pointer MTF collision hash chain.
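
A minimal sketch of the chained dictionary idea, assuming byte-wide atomic symbols; the field names are mine. Each symbol stores only its final character plus a back link to the symbol it extends, kept in parallel arrays (struct of array), so a code decodes by walking the chain back to the first character.

# Parallel arrays holding the chained LZW dictionary (sketch).
last_char = []      # final character of each symbol
prefix_code = []    # back link to the extended symbol, -1 for atoms
lengths = []        # cached symbol length, handy for length-bucketed B-trees

def add_atom(ch):
    last_char.append(ch); prefix_code.append(-1); lengths.append(1)

def add_symbol(prefix, ch):
    last_char.append(ch); prefix_code.append(prefix)
    lengths.append(lengths[prefix] + 1)
    return len(last_char) - 1

def decode(code):
    out = []
    while code != -1:             # walk the back links to the first character
        out.append(last_char[code])
        code = prefix_code[code]
    return bytes(reversed(out))

for b in range(256):
    add_atom(b)
s = add_symbol(ord('a'), ord('b'))   # symbol "ab"
print(decode(s))                      # b'ab'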

On the nature of the BWT size, and the efficiency: using the same LZW dictionary across multiple BWT blocks with the same suffix start character is effective with a minor edge effect, rapidly reducing in percentage as the block size increases. An interleave reordering such that the suffix start character is the primary group-by of linearity assists in the scan for searchability. Because a search can be rephrased as a join on various character pairings, the minimal character pair can be scanned up first, and “joined” to the end of the searched-for string, and then joined to the beginning in a reverse search, to then pull all the matches sequentially.
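
A loose sketch of the search-as-join idea over a plain text buffer, which stands in for whatever the BWT/LZW structures would actually provide; the bigram index and the names here are assumptions of mine. The rarest adjacent character pair drives the scan, and candidates are then verified outwards.

# Sketch: pick the rarest character pair in the needle, then "join"
# outwards to verify the rest of the string.
from collections import defaultdict

def bigram_index(text):
    idx = defaultdict(list)
    for i in range(len(text) - 1):
        idx[text[i:i + 2]].append(i)
    return idx

def find(text, idx, needle):
    if len(needle) < 2:
        return [i for i in range(len(text)) if text.startswith(needle, i)]
    k = min(range(len(needle) - 1),
            key=lambda j: len(idx.get(needle[j:j + 2], [])))
    hits = []
    for pos in idx.get(needle[k:k + 2], []):
        start = pos - k
        if start >= 0 and text.startswith(needle, start):
            hits.append(start)
    return hits

text = "banana bandana"
idx = bigram_index(text)
print(find(text, idx, "ana"))   # [1, 3, 11]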

Finding the suffixes in the LZW structure to produce symbol codes is relatively easy; finding the associated set of prefixes and infixes is a little more complex. A mostly constant search string can be effectively compiled and searched. A suitable secondary index extension mapping symbol sequences to “atomic” character sequences can be constructed to assist in the transform of characters to symbol dictionary index code tuples. This is a second level table in effect, which can also be compressed for atom specific search optimization, without loading the LZW dictionary during a find.

The fact that the BWT infers an all-matches sequential nature, and a second level of BWT with the dictionary index codes as the alphabet, could definitely reduce the needed scan time for finding each LZW symbol index sequence. Perhaps a unified B-tree as well as the length specific B-tree within the LZW dictionary would be useful for greater-than and less-than constraints.

As the index can become a self index, there may be a need to represent a row number alongside the entry. Multi column indexes, or primary index keys, would then best be represented as pointer tuples, with some minor speed/size data duplication in context.

An extends-chain pointer and a first-of-extends pointer are not required, as the next length B-tree will partly index all extenders. A root pointer to the extenders and a secondary B-tree on each entry would speed finding all suffix or contained-in possibilities. Of course it would be best to place these 3 extra pointers in a parallel structure, so as not to be a data interleaved array of struct, but a struct of array, when dynamic compilation of atomics is required.

The find performance will be slower than an uncompressed B-tree, but the compression is useful to save storage space. The fact that the memory is used more effectively when compression is used can sometimes lead to improved find performance for short matches with a high volume of matches. An inverted index can use the position index of the LZW symbol containing the preceding text to reduce the size of the pointers, and the BWT locality effect can reduce the number of pointers. This is more standard, and combined with the above techniques for sub phrases or super phrases should give excellent find performance. For full record recovery, the found LZW symbols only provide decoding in context, and the full BWT block has to be decoded. A special reserved LZW symbol could precede a back pointer to the beginning of the BWT block, and work as a header of the post-placed char count table and BWT order count.

So finding a particular LZW symbol in a block can be iterated over, but the difficulty in speed comes when the AND condition comes in on the same inverse index. Can the squared time performance be reduced? Reducing the number and size of the pointers helps in some ways, but it does not reduce the essential scan and match nature of the time squared process. Ordering the matching of the “find” with the least number on the count makes the iteration smaller on average, as it will be the least found, and hence the least joined. Limiting the join set to LZW symbols seems like it will bloom many invalid matches to be filtered, and in essence, simplistically, it does. But the lowering of the domain size allows application of some more techniques.

The first fact is that the LZW symbols are in a BWT block subgroup based on the following characters. Not that helpful, but it does allow a fast filter, and fewer pointers before a full inverse BWT has to be done. The second fact is that the letter pair frequency effectively replaces the count as the join order priority of the AND. It is further based on the BWT block subgroup size and the LZW symbol character counts for calculation of a pre-match density of a symbol; this can be effectively estimated via statistics, and does not need a fetch of the actual subgroup size. In collecting multiple “find” items, correlations can also be made on the information content of each, and a correlated but rarer “find” may be possible to substitute, or add in. Any common or uncorrelated “find” items should be ignored. Order by does tend to ruin some optimizations.
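
A minimal sketch of the join ordering, assuming plain posting lists as a stand-in for the BWT subgroup statistics; the selectivity estimate here is just list length, where the version described above would use the estimated pre-match density instead.

# Order the AND by estimated selectivity so the rarest "find" term
# drives the join (sketch with plain posting lists).
def and_query(postings):
    # postings: dict term -> sorted list of positions (or row ids)
    terms = sorted(postings, key=lambda t: len(postings[t]))
    result = set(postings[terms[0]])
    for t in terms[1:]:
        result &= set(postings[t])
        if not result:
            break
    return sorted(result)

postings = {"rare": [3, 40], "common": list(range(0, 100, 2)), "mid": [3, 9, 40, 77]}
print(and_query(postings))   # [40], driven by the rarest term first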

A “find” item combination cache should be maintained based on frequency of use and execution time to rebuild the result, both used in the eviction strategy. This in a real sense is a truncated “and” index. Replacing order-by by some other method, such as an order float, such that guaranteed order is not preserved but some semblance of polarity is kept, may also be very useful to reduce sort time, and prevent excessive activity and hence time spent when limit clauses are used. The float itself should perhaps be record linked, with an MTF kind of thing in the inverse index.
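
A small sketch of the eviction side of such a cache, assuming the score is simply use frequency multiplied by rebuild time; the names and the capacity are illustrative only.

# Keep the entries whose (uses x rebuild cost) is highest (sketch).
def evict(cache, capacity):
    # cache: dict key -> (uses, rebuild_seconds, result)
    keep = sorted(cache, key=lambda k: cache[k][0] * cache[k][1], reverse=True)
    for k in keep[capacity:]:
        del cache[k]

cache = {"a&b": (120, 0.02, ...), "a&c": (3, 1.5, ...), "b&c": (2, 0.01, ...)}
evict(cache, 2)
print(sorted(cache))   # keeps the two highest frequency-times-cost entries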

Implementation of Digital Audio Filters

An interesting experience. The choice of FIR or IIR is the primary one. As the filtering is modelling classic filters, the shorter coefficient varieties of IIR are the best choice for me. The fact of an infinite impulse response is not of concern with a continuous stream of data, and coefficient rounding is not really an issue when using doubles. IIR also has the advantage of an easy Sallen-Key implementation, due to the subtraction and re-adding of the feedback component, with very simple CR processing.

The most interesting choices are to do with the anti-alias filtering, as the interpolation filter on up-sampling is an easy choice. As the ear is not really responsive to phase, all the effort should go on the pass band response levels, and a good stop band non-response. Legendre and Butterworth are the candidates. The concept of a characteristic sound enters the design process at this point, as the cascading of SK filter sections is conceptually useful to improve the -6 dB response at cut off. This is a trade off of 20 kHz to 22.05 kHz in the alias pass band, and greater attenuation in the above-22.05 kHz infinite stop desire. The slightly greater desire for alias attenuation above pass band maximal flatness (for audio harmony) implies the Legendre filter is better for the purpose than Butterworth.

In the end, the final choice is one of convenience, and a 9th order filter was decided upon, with 4 times oversampling. The use of 4 times oversampling instead of 8 times oversampling lowers the alias frequencies by an octave. This fact, under the assumption of at least a linear reduction in the amplitude of the generator of an alias frequency as frequency increases, just requires a -12 dB extra gain reduction in the alias filter for an effective equivalence to 8 times oversampling (the up to, and the reflection back down to: 6 + 6). The amount of GHz processing also halves. These facts then become constructive in the design, with the bulk alias close to the cut off, and the minor reflected alias-alias limit not being too relevant to overall alias inharmonic distortion.

A triple chain of 3 pole Legendre filter sections is the decided design. The approximate -9 dB at the corner allows for slightly shifting up the cut off and still maintaining a very effective stop band. Code reuse also aids the I-cache usage for effective CPU use. A single 3 pole Legendre is the interpolation up-sample filter. The roll off for not using Butterworth does cut some high frequency content from the maximally flat, hence the concept of maximally flat, but it outperforms a Bessel filter in this regard. It’s not as though a phasor or flanger needs to operate almost perfectly in the alias band.
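
A minimal sketch of the cascade mechanics, with one caveat: scipy has no Legendre (optimum L) design built in, so Butterworth sections stand in purely to show the structure; the rates and the triple 3-pole split follow the text, the coefficients do not.

# Three cascaded 3-pole low pass sections = one 9-pole anti-alias filter
# (Butterworth as a stand-in for the Legendre sections of the design).
import numpy as np
from scipy.signal import butter, sosfilt

fs_up = 4 * 44100          # 4x oversampled rate from the original design
fc = 20000.0               # approximate pass band edge

sections = [butter(3, fc, btype="low", fs=fs_up, output="sos") for _ in range(3)]

def antialias(x):
    for sos in sections:
        x = sosfilt(sos, x)
    return x

t = np.arange(0, 0.01, 1.0 / fs_up)
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 60000 * t)
y = antialias(x)
print(np.max(np.abs(y)))   # the 60 kHz component should be heavily attenuated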

Perhaps there is improvement to be made in the up-sampling filter, by post up-sample 88.2 kHz noise shaped injection to eliminate all error at 44.1 kHz. This may have a potential advantage in mapping the alias noise into the low frequencies, instead of encroaching from the higher frequencies to the lower, and in creating the alias as a reduction in signal to noise, instead of at certain inharmonic peaks. The main issue with this is the 44.1 kHz wave fundamental, seen as the amplitude ring modulation of the injected phase noise by the 44.1 kHz stepped waveform between input samples. The 88.2 kHz “carrier” and the sidebands are higher in frequency, and of the same amplitude magnitude.

But as this is following for no 44.1 kHz error, the 88.2 kHz and sidebands are the induced noise, the magnitude of which is of the order of 1 octave up from the -3 dB roll at the corner, plus approximately the octave for a 3 pole filter, or about a 36 dB cut of a signal 3/4 of the input amplitude. I’d estimate about -37 dB at 88.2 kHz, and -19 dB at 44.1 kHz. Post processing with a 9 pole filter provides an extra -54 dB on down-sampling, for an estimate of around -73 dB or greater on the noise. That would be about 12 bit resolution at 44.1 kHz, increasing with frequency. All estimates, likely with errors, but in general not a good idea from first principles. Given that the 44.1 kHz content would be very small post the interpolation filter, -73 dB down from this would be good, although I don’t think achievable in a sensible manner.
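
Making the dB bookkeeping explicit (the figures are the estimates above, only the arithmetic is added):

# Quick check of the dB sums and the rough bit-resolution equivalent.
import math

db_injected_at_441 = -19.0          # estimated residual at 44.1 kHz
db_downsample_filter = -54.0        # extra cut from the 9-pole filter
total_db = db_injected_at_441 + db_downsample_filter
bits = -total_db / (20 * math.log10(2))   # ~6.02 dB per bit
print(total_db, round(bits, 1))     # -73 dB, roughly 12 bits of resolution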

Using the last filtered sample as the reference for the present sample filtered in as a baseline, the signal at 22.05 kHz would be smoothed. It would have a notch filter effect, by injecting quantization offset ringing noise at 88.2 kHz to cancel 22.05 kHz. The notch would likely extend down in frequency, for maybe -6 dB at about 11 kHz. Perhaps in the end it is just better to subtract the multiplied difference between two up-sample filters using different sinc spreading of a 1000 and a 1100 sample occupancy zero inter-fill. Subtracting the alternate up-conversion delta, as it were.

There is potentially also an argument for having a second order section with damping factor near 0.68 and corner 22.05 kHz, to achieve some normalisation from sinc up-sampling. This adds in an amount of Q such as to peak the filter, cancelling the sinc droop, which would be about 3% at 4 times oversampling.
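
Checking the droop figure, assuming the droop at the corner is the zero-order-hold sin(x)/x with x = pi * f / fs_up:

# Sinc droop at the 22.05 kHz corner for 4x and 8x oversampling.
import math

def droop(f, fs_up):
    x = math.pi * f / fs_up
    return 1.0 - math.sin(x) / x

print(round(100 * droop(22050, 4 * 44100), 2))   # ~2.55 %, "about 3%" at 4x
print(round(100 * droop(22050, 8 * 44100), 2))   # ~0.64 %, under 1% at 8x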

EDIT: Some of you may have noticed that the required frequencies for stable filtering are too high at 4 times oversampling. So, unfortunately for the CPU load, an 8 times oversample has to be used. The sinc error is less than 1% at this oversample, but it is still corrected in a similar way, with a benefit of 2 extra poles. Following this with a 0.1 dB 3 pole Chebyshev high pass which has been inverted gives a reasonable 5 pole up-sampling filter. The down-sampling filter, for code efficiency, is a triple instance of the same inverse Chebyshev, with the corner frequencies slightly offset to produce more individual zeros, and some spreading of the “ringing”. These 9 poles are enough to get the stop band ripple lower than 16 bit resolution. Odd order inverse Chebyshev filters are essential for the reflected spectra to be continually decreasing in amplitude.

Power Systems

Lots of free energy videos about, but does it actually work, or is it just virtual vapour ware? Here’s a highly unstable circuit I designed a few years ago. The magnetic balance is so fine that an external field can throw the circuit into an unstable power spike. Then I went for an inductance modulation of lower scale, using a 3 phase (+++) to 1 phase (++-) arrangement, for greater stability. The difficulty with such devices is not the working, but the switch off without raising the voltage potential to any unlucky hands. This safety aspect is the ultimate reason for non-use, and not, as some suppose, the disruptive effect on oil and other nuclear markets. Those markets may shrink, but will always be. The chemical industry will always have need of basic oil produce, and the lower short term profits of non-burn actually extend the future profits of chemical building. Transport is minor compared to health. The nuke industry could easily shrink and still be big. The power waste of removing rods with 90% still effective power is a whitewash of the electric power from a military objective. Reactors would be different for pure civil use.

Amazing colours, but what’s it really about? The Pu problem of fast breeding, and how somehow there will never be less of it, just does not add up for the efficient, too-cheap-to-meter power promises of not too long ago. There seems to be no real research on gamma cavity down conversion technology. I wonder how long it will be before the nova bomb: the effective slowing of light to lower than the black hole threshold, at Sun core. I think the major challenge is getting super dielectrics far enough into the Sun without melting. I suppose this is some hyperbole focus problem. One day people will understand the simple application of button technology, and the boxes will judge and provide on intent or not. It’s not like they won’t have a self interest.

Musical Research

I’ve looked into free musical software of late, after helping out setting up a PC for musical use. There’s the usual Audacity, and the feature limited but good LMMS. Then I found SuperCollider, and I am now hooked on building some new synthesis tools. I started with a basic FM synth idea, and moved on to some LFO and sequencing features. It’s very nice. There is some minor annoyance with the SC language, but it works very well, and has access to the nice Qt toolkit for making GUIs. So far I’ve implemented the controls as a MIDI continuous controller bus proxy, so that MIDI in is easy. There will be no MIDI out, as it’s not that kind of be-all thing. I’ve settled on 32 controls per MIDI channel, and one window focus per MIDI channel.

I’ve enjoyed programming it so far. It’s the most musical I’ve been in a long while. I will continue this as a further development focus.

The situation so far: https://github.com/jackokring/supercollider-demos

The last three are just an idea I had to keep the keyboard as a controller, and make all the GUI connection via continuous controllers only. I’ll keep this up to date as it develops.

Proof of Burn

Consider the proof of burn algorithm as an effective cryptocoin algorithm. Then turn this on its head. If a pseudo random burn density is exhibited, previously mined coins can be used as the burn sink. If this process is also considered a method of mine success, then the burn creates an associated make in the protocol. As this can be a fixed multiplier, it maintains a mint standard. An “I could have burnt block, proof of could have”, as it were. And get coins for doing it.
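
A loose sketch of the could-have-burnt test, assuming a hash-based pseudo random burn density; every name and the density constant here are illustrative, not a protocol definition.

# A previously mined coin counts as a burn sink (and earns the
# fixed-multiplier make) when its hash against the current block
# falls under the assumed density target.
import hashlib

BURN_DENSITY = 1 / 1024     # assumed target fraction of eligible coins

def could_have_burnt(coin_id: bytes, block_hash: bytes) -> bool:
    h = hashlib.sha256(coin_id + block_hash).digest()
    value = int.from_bytes(h[:8], "big") / 2**64
    return value < BURN_DENSITY

print(could_have_burnt(b"coin-123", b"\x00" * 32))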

Something like Slimcoin, with a kick-in, kick-out coins balancer keeping the fixed-ish multiplier net around a centroid. And having “modulable” kicks within limits. An “alternative slicer” can split this transact delta into a “modulable” stream for storing data. So if a method of “moving root block” can maintain a maximal blockchain size, data compression and a fixed bound cryptocoin with some injected coin total supply “chaos” results.

QED.

Of course the minimalist right-to-sign and signing would work in effect, but would encourage opening many intermediate wallets. Burning these intermediate wallets also provides the final coins for the real wallet. This in essence is proof of work. The next step of proof of stake would be to put any chosen coin into the mix for the hash, to gain the right to sign, as this would have the effect of maintaining a faster hashing with more coins to try, and the block number and chain check hash would prevent rehashing doubles. Or, as some have it, selection of the coin for the right of hash, a la premium bonds. Publishing private key hashes of intermediate wallets, such that the coin can be “seen” as mined and owned, looks promising. And maybe a right-to-sign transfer dole for the under-coined, although this would negate its use as a bit exchange.

That all explains why some of the best are hybrid in the mechanistic building of the blockchain. The proof of burn allows for coin supply modulation, and on fixed supply coins a possible under representation over time. The idea that the work should calculate something useful is good at heart, but misses the easy re-seeding duplication necessity, as the useful result is an open plain text. In many ways Ethereum will excel at derivatives …

The whole crypto-currency market is full of clones with different marketing, and some have a slight tech advantage. The main differences which are coming up are memory intensive hashes, and even some blockchain-as-random-data hashes. Proof of stake looks good in principle, but does have a non-mining aspect, and so is locked out for new users to mine without an initial stake, and in some sense why would they mine? In such a situation it’s not mining but interest payments for service. Which devalues the market without effort, which is sort of not getting paid without equitable burn.

Proof of work may be environmentally inefficient, but better that the box does it.