AzGaz

“Azgaz. Propane and protein suppliers. Beans and ‘as beens for beings plus gas. … This has to be the best winter supply name.”

“Could later expand into biodigester funerals, propane tank cold boxes, efficient air Peltier blower rings, eggs and a coffee shop library warm room.”

I don’t think it’s as daft as it sounds. Basic base staple and maybe some quality brown wholemeal seedy bread. Perhaps with a carbohydrate warning. I suppose the funniest part of the vision dream was the “Posh beenegi”. Not so much the content, but the price list in LED, and a rolling supply demand accounting for running the cooking and service. Not your basic beans just run at supply cost and the miracle of prophetic markup weekly spot pricing.

You don’t know what was on the time since cum joke laugher discount line.

Looks like some kind of summer supplies line is needed. Onto thoughts of efficient freeze-dry desalination. The pressure in the hot tank is lower than in the cold tank, so the dual hydraulic opposed piston is balanced on an escapement of area. The compression hot end is routed through the hot tank, and vice-versa, for transfer of heat by the oscillation of the escapement.

The final complexity is the salt accumulation in the hot tank. Some kind of screw extraction for the sale of “complete dried salt”. It likely requires a desalinate/clean cycle. So salt and distilled water? Or should that be remineralized water? I wonder how the “pump end inversion” with “routing to the other tank” affects efficiency? The S-balanced escapement could be thrown by a solar-powered linear actuator and some simple control electrics. With the sides of the tanks, it might look like a posh $ sign.

Gradient Optimization

So if a gradient descent hyper-parameter controlling the learning rate is the usual way, how can this possibly be improved? Given that the approximation of future gradient alterations is distributed depending on the batch, averaging over batches gives a more stable basis from which to infer an accelerated projection of the future descent.

The biggest problem to consider is bound oscillation, when the accelerated projection magnifies the learning delta to the point where locality becomes asymptotically non-convergent (a reverse symmetry in summation acceleration, treating the divergent terms as “merging toward” the first-term limit). This would converge as a metaseries in some instances, but not all. It then becomes essential to scale the approximations by inverse power weighting to make a convergent for highly entropic, unstable weights. It may also indicate that weight decomposition could be an effective strategy: splitting a neuron into the stable (time-aligned) and the unstable (time-inverted) partitions of a signal.

Assuming the unstable partition has a repellor (the opposite of an attractor in chaos), modelling could be used to invert the accelerated projection toward the repellor. If the accelerated series is approximated by an integral, would the unstable inverse acceleration perhaps be a reversal of the limits of integration? Or a sign reversal of the limits?

In a sense this is a splitting of the network into a composition of multiple networks, based on partitions related to the number of critical negative signs (or, more precisely, the number of things that could have negative signs). In this case just one sign, for time, acts like a hyper-parameter convergence property. After decomposition the algorithm can then be specifically optimized per partition.
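
A minimal sketch of that accelerated projection as I read it back (the function name, the drift estimate and the damping exponent are all illustrative choices, not a fixed design): average the recent batch gradients for a stable basis, estimate their drift, and project the descent forward with the projected terms damped by inverse powers so the accelerated series stays convergent.

import numpy as np

def accelerated_step(params, grad_history, lr=0.01, horizon=4, damping=2.0):
    # grad_history: recent per-batch gradients, oldest first
    gh = np.asarray(grad_history, dtype=float)
    g = gh.mean(axis=0)                                # averaged, stable basis
    drift = (gh[-1] - gh[0]) / max(len(gh) - 1, 1)     # crude estimate of gradient drift
    projection = np.zeros_like(g)
    for n in range(1, horizon + 1):
        projection += (g + n * drift) / n**damping     # inverse power weighting keeps the series convergent
    return np.asarray(params, dtype=float) - lr * projection

If the weights split into stable and unstable partitions, the damping exponent is the obvious per-partition knob.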

Moonshine Elliptical

Moonshine Elliptical represents my latest combination of commentary and findings on the massively impressive tome of knowledge that is elliptic curve theory (useful in cryptography and in generally understanding space and time). It covers fields of characteristic two, three and others (the characteristic being the number of times the multiplicative identity is summed to equal the additive identity), along with factorization of parts of the world mechanic into finite simple groups (the extended concept of primes, of which the primes are just one sequence).

1729

Boltzmann-Fermi-Dirac Colour Charges

It’s a long shot but imagine if you will a gluon made of two halves. The halves can each be drawn from the two “weights” (low and high of a non-zero-sum field) if they broke symmetry and the two “charges” like from the zero-sum field.

Given cancellation and combination, 8 gluons happen.

L+L+, L-L-, L+H-, L-H+, L+H+, L-H-, H+H+, H-H-.

So there’s more green weight “sticky” and the Boltzmann distribution for the half Bose-Einstein as a Fermi-Dirac perhaps. The blue colour perhaps travels less far due to higher “mass” (if it splits), but as the energy input in the strong force makes more gluons at a critical threshold, the further interaction has more energy and a less gluey implementation in blue.

I wonder if the QCD simulation evaluations can take this all into account for better accuracy. I put the two yellow “charges” in there which technically would be massed green, but given the charge +/- cancellation without perhaps LH equality would suggest a kind of neutral weight dipole.

EDIT: As the energy increases, moving into “small-x QCD”, colour coverage expands; to prevent saturation by a UV gluon density catastrophe, the critical temperature is exceeded, the “Cooper pair” effect on the half bosons is removed, and Pauli colour saturation removes the gluon density within the nucleon. Yes, this paragraph is unproven, but there must be some effect stabilising the UV catastrophe. This would also lead to a cyclic order of colour based on mass expression above the critical temperature for some critically small x.

K Ring Technologies Unlimited and MOND Galactic Equivalence

Due to the fascinating non-working Companies House email notification service for upcoming filings, which was needed but not working in the COVID period, I have missed the accounting date and K Ring Technologies Ltd is to be struck off the companies register. All business therefore becomes sole trader until such time as sense resumes.

But apart from that, I’ve extended the quantum uncertainty in a gravitational field idea and come up with

1/(r-r^2.dr)^2-1/(r+r^2.dr)^2

As a dipole expansive explanation for dark matter and, through the singularity of the first term, an eventual repulsive dark-energy kind of force. In the dipole limit in a galaxy, for example, the force will approximate 1/r and so is effectively a MOND on the small galactic scale.
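
For what it is worth, expanding the dipole difference (treating dr as a small constant) confirms the 1/r behaviour claimed above:

\[
\frac{1}{(r - r^2\,dr)^2} - \frac{1}{(r + r^2\,dr)^2}
  = \frac{4\,r^3\,dr}{\left(r^2 - r^4\,dr^2\right)^2}
  \approx \frac{4\,dr}{r} \qquad (r^2\,dr \ll r),
\]

with the first term blowing up as r.dr approaches 1, which is where the repulsive dark-energy-like behaviour would take over.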

Parse Buffer Overflows? Dark Priorities.

Sounds like such fun. An irremovable or a point update fix on the press? https://github.com/jackokring/majar/blob/master/src/uk/co/kring/kodek/Generator.java sounds like fun too. Choices, choices? Amplified radial uncertainty of Δr.GMm.Δt≤ℏ.r²/2 was kind of the order of last night. Is it dark matter? Is tangential uncertainty in the same respect part of dark energy? The radial uncertainty in a sure instant of time, and the potential gravitational energy? A net inward force congruent with dark energy?

And a tangential version of the squared hypotenuse of radius and tangential uncertainty of radius resultant? That leads to a reduction of gravity at a large radius and is more like dark energy. More evidence for a spectrum of uncertainty amount hence the “less than equals” being simplistic on an actuality?

Oh, no I’ll have to investigate the last GET/POST before errors … how boring (last time an Indian) … guess who?

The Small Big G and Why Gravity?

As G, the gravitational constant, is small compared to other force constants, this would make delta r bigger in gravity for the same amplified ħ uncertainty. With the time accuracy of light arrival in the visible range, the radial uncertainty at a high radial distance integrates over the non-linearity of the 1/r^2 force, for a net inward pull. Tangentially, the integral would net a reduction in gravity.

Δr.GMm.Δt≤ℏ.r²/2

So a partial reason for dark matter and dark energy to be explained by quantum gravity. It’s a simple formula: Δp = mΔv = FΔt (using a = Δv/Δt and F = ma = GMm/r²) substituted into ΔxΔp≤ℏ/2. The answer is approximate; an r±Δr might be more appropriate for exacting calculations, and r²+Δr² as a squared tangential hypotenuse.
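
Writing the substitution out explicitly (keeping the ≤ convention used throughout this note):

\[
\Delta x\,\Delta p \le \frac{\hbar}{2},\qquad
\Delta p = m\,\Delta v = F\,\Delta t = \frac{GMm}{r^2}\,\Delta t
\;\Rightarrow\;
\Delta r\,\frac{GMm}{r^2}\,\Delta t \le \frac{\hbar}{2}
\;\Leftrightarrow\;
\Delta r\,GMm\,\Delta t \le \frac{\hbar\,r^2}{2}.
\]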

As the constant in https://en.wikipedia.org/wiki/Coulomb%27s_law is 20 orders of magnitude higher, the dark Coulomb force will be 10 orders of radius larger for the same effect.

As the Mass by the Cube, and the Uncertainty by the Square.

As the distance to the centre of a gravitational lens increases, the radial uncertainty of the mass becomes significant, effectively reducing the minimal acceleration due to gravity and growing the bulk volume integral of mass in uncertainty. The force delta would be inverse cubic, countered by the cubic growth in integration volume. The force would therefore, in isotropy, become a fixed-quantity effect.

This is not even considering the potential existence of a heavy graviton, or the concept of conservation of a mass information velocity that would have a dark energy effect. It still seems “conservation of acceleration” is not even a taught effect considering there are many wine glasses that would have loved to know about it.

As for the rapid running-constant increase toward the unification energy and what inner sun horizons would do to a G magnification? Likely not that relevant? Only the EM force seems to increase in coupling as the energy of the system dilates in time. This would imply the other three standard forces decrease, so necessitating an increase in radial uncertainty on average. The strong force has a grows-with-distance effect below the confinement distance, and so as the radius reduces, a Δr.k.Δt≤ℏ/(2r) rule is likely, which would lead to the most likely reciprocal isomorphism of dark matter and dark energy.

Due to quark mass differences, and k therefore being one of 15 = 6*(6-1)/2 constants depending on the quark pair, a triad product pentad structuring of force to acceleration might occur, with further splitting by boson interactions with quarks. Maybe this is a long shot to infer the finality on the low energy quark set of 6. Likely a totient in there for an 8. That’s all in the phi line and golden, silver and forcing theorems. I wonder if forcing theorems have unforcing and further forcing prerogatives?

≤?

You could be right.  So? It’s not as though it affected any of the local accelerators I don’t have. If it’s all about the bit not understood, then as a product constraint, it is where the action is at. As the maths might work, I am speculating the further equations will be in a less than form and so need fewer corrections? Premature optimization is the root? Any tiny effect would be on that side of equality perhaps. Maybe it was just a tilt on the suggestion of an inverse isomorphism. I couldn’t say, but that’s how it exited my mind.

K Ring CODEC Existential Proof

When p = 2q, L(0) is not equal to L(1).

Find n such that (L(0)/L(1))^(2n+1) defines the number of bias elements for a certain bias exceeding 2:1. This is not the minimal number of bias elements but is a faster computation of a sufficient existential cardinal order. In fact, it’s erroneous. A more useful equation is

E=Sum[(1-p)*(1-q)*(2n-1)*(p^(n-1))*q^(n-1)+((1-p)^2)*2n*(q^n)*p^(n-1),n,1,infinity]

Showing an asymmetry on pq for even counts of containment between adding entropic pseudo-randomness. So if the direction is PQ biased detection and subsample control via horizontals and verticals position splitting? The bit quantity of clockwise parity XOR reflection count parity (CWRP) has an interesting binary sequence. Flipping the clockwise parity and the 12/6 o’clock location inverts the state for modulation.
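
A hedged numerical check of the expectation sum above (truncating the infinite sum; the term layout follows the formula exactly as written):

def codec_expectation(p, q, terms=500):
    # E = sum over n of (1-p)(1-q)(2n-1) p^(n-1) q^(n-1) + (1-p)^2 * 2n * q^n * p^(n-1)
    total = 0.0
    for n in range(1, terms + 1):
        total += (1 - p) * (1 - q) * (2 * n - 1) * p**(n - 1) * q**(n - 1)
        total += (1 - p)**2 * 2 * n * q**n * p**(n - 1)
    return total

# the p = 2q case, and its swap, to see the asymmetry on pq
print(codec_expectation(0.5, 0.25), codec_expectation(0.25, 0.5))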

So asymmetric baryogenesis, that process of some bias in antimatter and matter with an apparently identical mirror symmetry with each other. There must be an existential mechanism and in this mechanism a way of digitizing the process and finding the equivalents to matter and antimatter. Some way of utilizing a probabilistic asymmetry along with a time application to the statistic so that apparent opposites can be made to present a difference on some time presence count.

Proof of Topological Work

A cryptocoin mining strategy designed to reduce power consumption. The work is divided into tiny bursts of work with bits of stall caused by data access congestion. The extensive nature of solutions and the variance of solution time reduce conflict, as opposed to a single hash function solve. As joining a fork increases the splitting of shares, focusing the tree spread into a chain has to be considered. Pull request ordering tokens can expire until a pull request is logged with a solution; this means pull request tokens have to be requested at intervals and also after expiry, while any solution needs a valid pull request token included in the pull request, such that the first solution on a time interval can invalidate later pull requests solving the same interval.

The pull request token contains an algorithmic random and the head random based on the solution of a previous time interval, which must be used to perform the work burst. It therefore becomes pointless to issue pull request tokens for a future time interval, as the head of the master branch has not been fixed, and so the pull request token would not, by a large order, be checksum valid.

The master head address becomes the congestion point. The address is therefore published via a torrent-like mechanism with a clone performed by all slaves who wish to become the elected master. The slaves also have a duty to check the master for errors. This then involves pull-request submissions to the block-tree (as git is) on various forks from the slave pool.

This meta-algorithm therefore can limit work done per IP address by making the submission IP be part of the work specification. Some may like to call it proof of bureaucracy.
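
A toy sketch of the token lifecycle as I read it (the interval length, field names and checksum scheme are hypothetical illustrations, not a specification):

import hashlib, os, time

INTERVAL = 60  # seconds per work interval (illustrative)

def issue_token(head_random, now=None):
    # a token is tied to the current interval and the head random of the previously solved interval
    now = time.time() if now is None else now
    interval = int(now // INTERVAL)
    algo_random = os.urandom(16)
    checksum = hashlib.sha256(head_random + algo_random + interval.to_bytes(8, "big")).hexdigest()
    return {"interval": interval, "algo_random": algo_random,
            "head_random": head_random, "checksum": checksum}

def token_valid(token, head_random, solved_intervals, now=None):
    # expired tokens must be re-requested; a logged solution invalidates later tokens for the same interval
    now = time.time() if now is None else now
    if token["interval"] < int(now // INTERVAL):
        return False
    if token["head_random"] != head_random or token["interval"] in solved_intervals:
        return False
    expect = hashlib.sha256(head_random + token["algo_random"] +
                            token["interval"].to_bytes(8, "big")).hexdigest()
    return token["checksum"] == expect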

The Cryptoclock

As running a split network on a faster clock seems the most effective hack, the master must set the clock by signed publication. On a clock split the closest modulo hashed time plus block slave salt wins. The slave throne line is on the closest modulo hashed values for salt with signed publication. This ensures a corrupt master must keep all slave salts (or references) in the published blocks. A network join must demote the split via a clock moderation factor. This ensures that culling a small subnet to run at a higher rate to disadvantage the small subnet is punished by the majority of neutrals on the throne line in the master elective on the net reunion, by the punitive clock rate deviation from the majority. As you could split and run lower in an attempt to punify!

Estimated 50 pounds sterling 2021-3-30 in bitcoin for the company work done 😀

The Rebase Compaction Bounty (Bonus)

Designed to be a complex task a bounty is set to compress the blockchain structure to a rebased smaller data equivalent. This is done by effectively removing many earlier blocks and placing a special block of archival index terminals for non-transferred holdings in the ancient block history. This is bound to happen infrequently to never and set at a lotto rate depending on the mined percents. This would eventually cause a work spurt based on the expected gain. The ruling controlling the energy expenditure versus the archival cost could be integrated with the wallet stagnation (into the void) by setting a wallet timeout of the order of many years.

A form of lotto inheritance for the collective data duplication cost of historic irrelevance. A super computation only to be taken on by the supercomputer of the age. A method therefore of computational research as it were, and not something for everybody to do, but easy for everybody to check as they compact.

23

The classic 3*4+1+1+4+(9-1)/2+[this one @23rd]+(9-1)/2. For a total of 27. The whole 163 and x^2-x+41 Technetium (+2) connection. Interesting things in number theory along with sporadic groups and J4 which is the only one with an ordered factor of 43 and an 11^3. Promethium at 61 is connected somehow maybe by 12 * 62 = 744 with something not doing the 10 “f-orbitals” thing, and 23 comes in on the uniqueness of factorization too along with 105.  Along with the 18 families of groups 26(or 27)+18 = 44(or 45) in cubic elliptic varieties of the discriminant.

26 letters in the alphabet plus space? Rocks with patterned circles on an island? Considering one of the 44 is the circle integer modulo ring with no “torsion”, then there are kind of 43 bending varieties and some kind of dimension null over a double bend “cover” inclusion as a half factor of one of the main 18 sequence groups. Likely a deep connection to the square-free factor “Möbius mu” and topological orientability.

Polynomial Regression Estimators

Consider a sampled sequence of n samples and an interpolation of order n. The sample sequence can be differentiated by backward and forward differences of all n samples to make a first differential sequence of n elements or more. This too has a polynomial fit. The polynomial can be integrated to make an order n+1 polynomial with a new constant, which can be estimated by a regression fit of the n samples. This can then make an n+1 th estimation to show a fit ad infinitum. Weighting the regression error based on sample time locks more history and less prediction into the forecast, but fits less on the predictive end. At the opposite extreme the forecast is based on a forecast, not based on history. In between is a concept of optimal.
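
A hedged numpy sketch of that cycle (the order, the difference operator and the constant fit are illustrative choices): difference the samples, fit a polynomial to the differences, integrate it, and recover the integration constant by regressing the integrated polynomial back onto the samples.

import numpy as np

def extend_by_one(samples, order=3):
    # differentiate, fit, integrate, then regress the new constant back onto the samples
    y = np.asarray(samples, dtype=float)
    t = np.arange(len(y), dtype=float)
    d = np.gradient(y, t)                      # backward/forward differences
    dpoly = np.polyfit(t, d, order)            # polynomial fit of the differential sequence
    ipoly = np.polyint(dpoly)                  # order n+1 polynomial, constant term zero
    c = np.mean(y - np.polyval(ipoly, t))      # regression fit of the new constant
    return np.polyval(ipoly, len(y)) + c       # the n+1 th estimation

print(extend_by_one([0.0, 0.8, 1.9, 3.1, 4.6, 6.2]))

Weighting the residuals in the constant fit by sample time is where the history-versus-prediction trade-off mentioned above would enter.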

A genetic algorithm optimizing the weighting provides a fitness score based on future measured truth. The population spread acts as a Monte Carlo, and some selection for spreading entropy as well as future weight would input entropy flair for efficiency, by the association of prediction clustering elimination and outlier promotion for risk estimates. An irony of population size and death by accounting in genetic algorithms weeds out some “bum notes” but keeps the “right on” in the ill-computed silicon heaven (via Löb’s theorem of truth by confirmed assumption). Hence an eviction cache as in silicon hardware. What measures the crash instability of markets in the recession local optimum?

Yes, I do imply logic machines are operating reality. I do not think all the machines use the same operator algebra. Some algebras survive, some do not. There is nothing in the closure complexity of efficient algebras supporting the accumulation of axioms as leisure free from a suppressed fight.

And Physics

The number of light bosons stems from the cyclotomy of 18 (divisors 1, 2, 3, 6, 9, 18 and new roots 1, 1, 2, 2, 6, 6) for 18 normal bosons (6 free ones as 18-12 [not fermion bound], which sounds like some regular “found bosons”). If the equality of the mass-independent free space view to zero is just an approximation to the reciprocal of a small oscillation, then a differential equation for such is just scaled by units of Hz², and having that would place the cyclotomy at 20 (divisors 1, 2, 4, 5, 10, 20 and new roots 1, 1, 2, 4, 4, 8) for 20 dark bosons perhaps? Or maybe it works inversely, reducing the cyclotomy to 16 (divisors 1, 2, 4, 8, 16 and new roots 1, 1, 2, 4, 8) for 16 dark bosons?
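
The “new roots” counts quoted in brackets are just Euler’s totient of each divisor (the new primitive roots of unity that divisor contributes), and they always sum back to the cyclotomy; a quick check:

from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

for n in (18, 20, 16):
    ds = divisors(n)
    print(n, ds, [phi(d) for d in ds], sum(phi(d) for d in ds))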

Or “free dark bosons” at a tally of 2 (or -2)? I think I used η with a floating ~ (tilde) to indicate this secondary oscillation. Fermi exclusion unique factor domain expansion? Non-unique compaction “gravity”?

What tickles my mind is the idea of 2 “ultra free dark bosons”. Put another way: <<So this Pauli exclusion of fermions. If bosons (some of them theoretical) confine and attach to fermions, giving them a slightly less than expected Pauli exclusion when confined, does this imply a kind of “gravity-like” force? If the bosons exist in a Q[√-23] field, or does the “a de Moivre number and p is a prime number; unique factorizations of cyclotomic integers fail for p ≥ 23” fact provide a dark-energy-like effect, as all below 24 have more Pauli exclusion of state due to lack of degenerate factorization of a 23 particle “super-force”?>>

But 20, and an inverse of the Hz² (+2, -2) => (*Hz², /Hz²) @ ex, for something like 23: the prime larger than 20 is itself an essential behaviour-encompassing number, and 23 is also the prime less than 24, itself another essential behaviour-encompassing number. The most exclusive field of 23 and a totient amongst many. So, like the disjoint 23 feedback being maximal, it presents the most of its dark influence on dark; dark influence for zero black, kinda dark.

15015 and 255255 on the Beyond

The peaks within and without crossing the R0 of gain into implementation in reality. Comprehensive ring gates and information transport and regenerative bits held fast by tallies of entropy. Rings within subsets in later fields may we walk into shining bright with the power of imaticity may we move toward imagionics and theory of technologies.

So the Hz² must have come from somewhere. Equality of something being equal to a constant over the angular energy. An intuition that something with higher angular energy is more E=mc² massive and has a greater boson intensity of flux. This multiplies with the bosonic cyclotomics to field-scale them. To keep within the small constant η, if it is not zero but oh so close to it (relatively tiny, and it could be Planck’s, but this is not proven), the fermionic mass-independent factor has to shrink in scale by reducing velocities, accelerations and jerks, making it more certain in nature and maintaining the constancy of η. True enough, it could be a simplistic gamble on the nature of energy density, or it could just be more flexible in quadrature of complex phase lead and lag shift from zero while still being “fast and loose”.

Free42 Android App Longer Term

A very nice calculator app. I’ll continue to use it. What would I change? And would I change what I’d changed? A fork with extras began and is in development.

  • I’d have a SAVE and LOAD with load varieties (LOADY, LOADZ, LOADT for a register and all stack registers higher, if all 4 stack items are not to be restored, along with LASTX) depending on restoring the right stack pattern after a behaviour, which makes for first-class user-defined functions. SAVE? would return how many levels of saving there are.
  • Perhaps variables based on the current program location (or section). A better way of reducing clutter than a tree, while accessing the tree would need a new command specifying the variable context. This would lead to a minimal CONTEXT to set the LBL style recall context and use the THIS to set this context as per usual but without the variable in context clutter. A simple default to change the context when changing program space ensures consistency of being. In fact, nested subroutines could also provide a search order for an outer context. THAT could just remove one layer of the context, or more precisely change the current to the one below on the call stack such that THAT THAT would get the second nesting context if it exists. LSTO helps a little.
  • Some mechanics for the execution of a series term generator which by virtue of a modified XEQG (execute generator), could provide some faster summation or perhaps by flags a product, a sum, a term or continued fraction precision series acceleration.
  • Differential (numeric) and integral (endpoint numeric, multiple kinds, and all with one implicit bound of zero for the constant at zero) algorithms that I would not reimplement 😀 as I would like a series representation by perhaps an auto-generated generator. So XEQG would have a few cousins.
  • Although Mathematica solving might not give %n inserts for parameterizing a solution for constants, this does not prevent XEQG doing a differential either-side sampling at high order and reducing it geometrically for a series estimation of the exact value. In terms of integrals, an integral of x^n.f(x) where n goes to zero provides the first bit of insight into integrals as convergent sets of series, with an exclusion NonconvergentAreaComplex[] on Gödelian (made to make a method of solve fail) differential equations (or parts thereof). Checking the convergents of the term supplied to XEQG and cousins allows for sensible errors and perhaps transforms to pre-operators on the term provider function. SeriesRanged[] (containing an action as a function) in list form for the other parts, with correct evaluation based on value, and how does this go multivariate? Although this looks out of place, it relates to series solutions of differential equations with more complex forms based on series of differentials. The integral of x.f(x)/x by parts is another giver of two more generators. The best bit is that the “integral” from such a form is just evaluated at one endpoint (maybe a subtraction for definite integrals) and, as they include weighted series, can often be evaluated by the series acceleration of a small number of differentials of the function to be integrated. The differentials themselves can often be evaluated accurately as a series converging as the delta is geometrically reduced, with the improvements in the estimates being considered as new smaller terms in the series. So an integral evaluation might come down to (at 9 series terms per acceleration) about 2*90 function invocations instead of depending on Simpson’s rule, which has no series weighting to “accelerate” the summation. Also, integration up to infinity might be a simpler process when the limits are separated into two endpoint integrals, as the summation over a limit to an estimation of convergence at infinity would not need as many conditional test cases on none, both and either one. As I think integrals should always return a function with parametric implicit constants, should not differentials return a parameterized function, with a default boolean for the possibility of retrieving the faded constants? An offsetable self-recovery of diminished offset generic? SeriesRanged[Executive[]][ … ]
  • Perhaps an ACCESS command for building new generators (with a need to get a single generated) with a SETG (to set the generator evaluating ACCESS) and  XEQG can become just a set of things to put in SETG “…” making for easy generators of convergents and other structures. GETG for saving a small text string for nesting functions might be good but not essential and might confuse things by indirection possibilities. Just having a fixed literal alpha string to a SETG is enough as it could recall ACCESS operators on the menu like MVAR special programs (and not like INPUT programs). XEQG should still exist as there is the SETG combiner part (reducer) as well as the individual term generator (mapper) XEQG used for a variety of functions. This would make for easier operator definition (such as series functions by series accelerations or convergent limit differentials by similar on the reduction of the delta) without indirect alpha register calling of iterates.
  • A feature to make global labels go into a single menu item (the first) if they are in the same program, which then expands to all in the current program when selected for code management.
  • +R for addition with residual: the fraction of X that was not absorbed into the sum with Y is returned in the X register, and the sum in Y. This would further increase precision in some algorithms (a minimal sketch follows this list).
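
A minimal sketch of what +R would return, in ordinary floating point (the classic Møller/Knuth two-sum residual, shown only as an illustration of the idea):

def add_with_residual(x, y):
    # returns (sum, residual): the part of x and y not absorbed into the rounded sum
    s = x + y
    y_virtual = s - x
    x_virtual = s - y_virtual
    residual = (x - x_virtual) + (y - y_virtual)
    return s, residual

print(add_with_residual(1e16, 1.0))   # (1e16, 1.0): the 1.0 survives as the residual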

Rationale (after more thought and optimization)

  • Restoring the stack is good for not having to remember what was there and whether you need to store it. It requires a call stack frame connection, so maybe SAVE? is just call stack depth and so not required. (4 functions): LOAD and SAVE, with LOAD placing the old loaded X into LAST X, plus two commands used before LOAD: USE, to indicate a stack consumption effect after the restore, and MAKE, to leave one stack entry next lowest as an output.
  • Although local variables are good, in-context variables would be nice to see. Clutter from other contexts is avoided, or at least placed more keystrokes away from the main variables. This would also be easier to connect to the call stack frame. (3 functions): CONTEXT, THIS and THAT. RCL tries CONTEXT before the call stack program associated variables. No code spams variables into other namespaces. STO stores into its associated variable space. This ensures an import strategy. The .END. namespace can be considered an initial global space, so the persistence of its content upon GOTO . . is useful, and so XEQ “.END.” should always be available.
  • INTEG and SOLVE could be considered operators, but with special variables. Separation of the loop to reduce on from the map function makes more general summation functions possible given single term functions. It would be more general to have 3 commands so that the reducer, the mapper and the variable to map could all be set, but is that level necessary? Especially since, in use, a common practice of setting the reducer and applying it to different maps seems more useful. But consistency and flexibility might have PGMRED, PGMMAP and MAPRED “var” for generality in one variable, with ACCESS in the reducer setting the right variable before executing the mapping. (4 functions).
  • Addition residual is a common precision technique. (1 function) +R.
  • I’d also make SOLVE and INTEG re-entrant (although not necessarily to a nested function call (a function already used in call stack frames stack check?)) by copying salient data on process entry along with MAPRED where the PGMRED set function can be used again and so does not need a nested reused check.
  • As to improvements in SOLVE, it seems that detection of asymptotes and singularities confuses interval bisection. Maybe adding a small amount and subtracting a small amount move actual roots but leave singular poles alone swamped by infinity. Also, the sum series of the product of the values and/or gradients may or may not converge as the pole or zero is approached.
  • Don’t SAVE registers or flags as this is legacy stuff. Maybe a quadratic (mass centroid) regression, Poisson distribution and maybe a few others, as the solver could work out inverses. Although there is the inconsistency of stack output versus variable output. Some way of auto-filling in MVAR from the stack and returns for 8 (or maybe 6 (XYZT in and X subtracted out, and …)) “variables” on the SOLVS menu? Maybe inverses are better functionality but the genericity of solvers are better for any evaluation. Allow MVAR ST X etc, with a phantom SAVE and have MRTN for an expected output variable before the subtraction making another “synthetic” MVAR or an exit point when not solving (and solving with an implicit – RTN and definite integrals being a predefinition of a process before a split by a subtractive equation for solving)? It would, of course, need MVAR LAST X to maybe be impossible (a reasonable constraint of an error speed efficiency certainty). (5+1 menu size). Redefinition of many internal functions (via no MVAR and automatic solver pre and postamble) would allow immediate inverse solves with no programming (SOLVE ST X, etc., with no special SOLVE RTN as it’s a plain evaluation). This makes MRTN the only added command, and the extra ST modes on the SOLVE and also a way of function specification for inbuilt ones.  The output to solve for can be programmatically set as the x register value when PGMSLV is executed and remembered when SOLVE is used next.
  • Register 24 is lonely. Perhaps it should contain weighted n, Σy but no it already exists. Σx²y seems better for the calculation of the weighted variance. That would lead to registers 0 to 10 being fast scratch saves. The 42 nukes other registers in ALLΣ anyway and I’d think not many programs use register 24 instead of a named variable. I’d be happy about only calculating it when in all mode, as I never switch and people who do usually want to keep register compatibility of routines for HP-41 code. Maybe PVAR for the n/(n-1) population variance transforms although this is an easy function to write by the user. A good metric to measure what gets added. Except for +R which is just looping and temporary variables for residual accumulation with further things to add assuming the LAST Y would be available etc.
  • I’d even suggest a mode using all the registers 0 to 10 for extra statistical variables and a few of those reserved flags (flag 64). I think there is at least 1 situation (chemistry) where quadratic regression is a good high precision idea. This makes REGS saving a good way of storing a stats set. Making the registers count down from the stats base in this mode seems a good idea. The following would provide quadratic regression with lin, log, exp and pow relation mapping on top of it for a CFIT set of 8 along with the use of R24 above. An extra entry on the CFIT MODL menu with indicator  for that enablement toggle of the extra shaping and register usage (flag 64 set) with an automatic enable of ALLΣ. As the parabolic constant would not be often accessed it would be enough to store it and the other ones after a fit, not interfering with live recalculation so as to not error by assumption. It would, of course, change the registers CLΣ sets to zero. Flag 54 can perhaps store the quadratic fitting model in mode. Quadratic Regression details. Although providing enough information to manufacture a result for the weighted standard deviation, it becomes optimal to decide to add WSD or an XY interchange mode on a flag to get inverse quadratic regression. Which would provide 12 regression curve options. The latter would need to extend the REGS array. FCSTQ might be better as a primary command to obtain the forecast root when the discriminant is square root subtracted negative as two forecast roots would exist. The most positive one would likely be more real in many situations. Maybe the linear correlation coefficient says something about the root to use and FCSTQ should use the other one?
    • R0 = correlation coefficient
    • R1 = quadratic/parabolic constant
    • R2 = linear constant
    • R3 = intercept constant
    • R4 = Σx³
    • R5 = Σx⁴
    • R6 = Σ(ln x)³
    • R7 = Σ(ln x)⁴
    • R8 = Σ(ln x)²y
    • R9 = Σx²ln y
    • R10 = Σ(ln x)²ln y
  • Flags still being about on the HP-28S was unexpected for me. I suppose it makes me not want to use them. The general user flags of the HP-41 have broken compatibility anyway as 11 to 18 are system flags on the HP-42S. There would be flags 67, 78, 79 and 80 for further system allocations.
  • I haven’t looked at whether the source for the execution engine has a literal-to-address resolver with an association struct field for speed, with indirection handled in a similar manner, maybe even down to address function pointer filling-in of checks and error routines, like in a virtual dispatch table.
  • If endpoint integrals provide wrong answers, then even the investigation into the patterns of deviation from the true grail summates to eventually make them right in time. A VirtualTimeOptimalIngelCover[] is a very abstract class for me today. Some people might say it’s only an analytical partial solution to the problem. DivergentCover[] as a subclass of IngelCover[] which itself is a list container class of the type IngelCover. Not quite a set, as removing an expansive intersection requires an addition of a DivergentCover[]. It’s also a thing about series summation order commutativity for a possible fourth endpoint operator.
  • MultiwayTimeOptimizer[ReducerExecutive[]][IngelCover[MapExecutive[]][]] and ListMapExecutiveToReturnType[] and the idea of method use object casting. And an Ingel of classes replaced the set of all classes.
  • I don’t use printing in that way. There’s an intermediate adapter called a PC tablet mix. The HP-41 was a system. A mini old mainframe. A convenience power efficiency method. My brother’s old CASIO with just P1 and P2 was my first access to a computational device. I’m not sure the reset kind of goto was Turing complete given there was not enough memory for predicate register branch inlining.
  • ISO 7 Layer to 8 Layer, insert at level 4, virtualized channel layer. Provides data transform between transmit optimally and compute optimally. Is this the DataTransport layer? Ingel[AutomaticExecutive[]][].
    1. Paper
    2. (Media Codec)
    3. Symbols
    4. (Rate Codec)
    5. Envelope
    6. (Ring Codec) 3, 2 …
    7. Post Office
    8. (Drone codec)
    9. Letter Box
    10. (Pizza codec)
    11. Name
    12. (Index codec)
    13. Dear
  • Adding IOT as a toggle (flag 67) command in the PRINT menu is the closest place to IO on the Free42. Setting the print upload to a kind of object entity server. Scheduling compute racks with the interface problem of busy until state return. A command CFUN executes the cloud functions which have been “printed”. Cloud sync involves keeping the “printed” list and presenting it as an options menu in the style of CATALOG for all clouded things. NORM (auto-update publish (plus backup if accepted), merge remote (no global .END.)) and MAN (manual publish, no loading) set the sync mode of published things, while TRACE (manual publish, merge remote plus logging profile) takes debug logs on the server when CFUN is used but not for local runs. Merge works by namespace collision of local code priority, and no need to import remote callers of named function space. LIST sets a bookmark on the server.
  • An auto QPI mode for both x and y. In the DISP menu. Flag mode on in register 67. Could be handy. As could a complex statistics option when the REGS array is made complex. It would be interesting to see options for complex regression. As a neural node functor, a regression is suitable for propagation adaptation via Σ+ and Σ- as an experiment into regression fit minimization.

9+4+1+1+3*4=27 and a 9th Gluon for 26 Not

It still comes to mind that the “Tits Group gluon” might be a real thing, as although there seem to be eight, the ninth one is in the symmetry of self attraction, perhaps causing a shift in the physical inertia from a predicted instead of filled in constant of nature.

There would appear to be only two types of self dual coloured gluons needed in the strong nuclear force. As though the cube roots of unity were entering into the complex analysis that is within the equations of the universe.

9+4+1+1+3*4=27

The 3*4 is the fermionic 12 while the relativistic observational deviation from the abstract conceptual observation frame versus the actual moving observation particle provides for the cyclotomic 9+4+1+1 = 15 one of which is not existential within itself but just kind of a sub factor of one of the other essentials. Also it does point out a 3*5 that may also be somewhere.

Given the Tits Gluon, the number of bosons would be 14, which, removing 8 for gluons, leaves 6; removing 4 for the electroweak boson set would leave 2, and removing the Higgs would leave 1 boson left to discover for that amount of complexity in the bosonic cyclotomic groups.

The fantastic implications of the 26 group of particles and the underlying fundamentals which lead to strong complex rooted pairs, and leptonic pair set separation. Well, that’s another future.

Roll on the Plankon, as good a name as any. The extension of any GUT beyond it would either be some higher bosonic cyclotomy or a higher order effect of fermions leading to deviation from Heisenberg uncertainty.

Up Charm Top
Down Strange Bottom
Electron Muon Tau
E Neutrino M Neutrino T Neutrino
H Photon W+
? Z0 W-
Gluon Gluon Gluon
Gluon Tits Gluon Gluon
Gluon Gluon Gluon

Dimensions of Manifolds

The Lorentz manifold is 7 dimensional, with 3 space-like, 1 time-like and 3 velocity-like dimensions, while the other connected manifold is 2 space-like, 1 time-like, 2 velocity-like and a dimensionless “unitless” dimension. So the 6 dimensional “charge” manifold has properties of perhaps 2 toroids and 2 closed path lines in a topological product.

Metres to the 4th power per second. Rate of change of a 4D spatial object perhaps. The Lorentz manifold having a similar metres to the 6th power per second squared measure of dimensional product. Or area per kilogram and area per kilogram squared respectively. This links well with the idea of an invariant gravitational constant for a dimensionless “force” measure, and a mass “charge” in the non Lorentz manifold of root kilogram.

Root seconds per metre? Would this be the Uncertain Geometry secondary “quantum mass per length field” and the “relativistic invariant Newtonian mass per length field”? To put it another way, the constant G maps the kg squared per unit area into a force, but the dimensionless quantity (not in units of force) becomes a projector through the dimensionless-to-force map.

GF*GD = G and only GF is responsible for mapping to units of force with relativistic corrections. GD maps to a dimensionless quantity and hence would be invariant. In the non Lorentz manifold the GMM/r^2 equivalent would have units of root kilogram ((root seconds) per metre), and GD would have different units too. Another option is for M to be quantized and of the form GM/r^2, as both the “charge” masses could be the same quantized quantity.

The reason the second way is more inconsistent is that the use of the product of field energies as the linear projection of force would give an M^2 over an r^2, and it would remove some logical mappings or symmetries. In terms of moment of inertia thinking, GMM/Mr^2 springs to mind, but has little form beyond an extra idea to test out the maths with.

W Baryogenesis Asymmetrical Charge

The split of W plus and minus into separate particle slots takes the idea that the charge mass asymmetry between electrons and protons can come from a tiny mass half-life asymmetry. Charge cancellation of antiparticle WW pairs may still hold, but momentum cancellation does not have to be exact, leading to a net dielectric momentum. Who knows an experiment to test this? A slight induced photon to Z imbalance on the charge gradient, with a neutrino emission. The cause of the W plus to minus mass ratio would be a consequence of the sporadic group orders and some twist in very taut space versus some not as taut space, or a dimensionless expression of a symmetrically broken balance of exacts.

The observation of a dimensionless “unitless” dimension being invariant to spacetime and mass density dilation. My brain is doing a little parallel axis theorem on the side, and saying 3D conservation of energy is an emerging construction with torsion being a dilative observable in taut spacetime.

A recent experiment on the inertia of spin in neutrons provides a wave induction mechanism. Amplified remote observation of non-EM radio may be possible. Lenz’s law of counter EM cancellation may not apply. It is interesting. Mass aperture flux density per bit might be OK depending on S/N ratio. That reminds me of nV/root Hz. So root seconds is per root Hz, and nV or scaled volts is Joules per mol charge, Z integer scale */ Joules, or just Joules, or in Uncertain Geometry house units Hz. So Hz per root Hz, or just root Hz (per mol).

So root seconds per metre is per root Hz metre. As the “kilogram equivalent but for a kind of hypercharge” in the non Lorentz manifold perhaps. The equivalent of GD (HD) projecting the invariant to an actual force. By moving the dilative into GF and HF, use can be made of invariant analysis. Mols per root Hz metre is also a possible QH in FHI = HD.QH.QH/R^2, the manifold disconnect being of a radius-calculated norm in nature. A “charge” in per noise energy metre?

Beyond the Particles to the 18n of Space with a Tits Connection

Why aye lad, it’s sure been a beginning. The 26 sporadic groups and the Tits group as a connection to the 18n infinite families of simple groups. What is the impedance of free space (Google it), and does water become an increase or decrease on that number of ohms? Inside the nature of the speed of light at a refractive boundary, what shape is the bend of a deflection, and what ohm expectations are there on the impedance to the progress of light?

Boltzmann Statistics in the Conduction of Noise Energy as Dark Energy

Just as ohm metres is a resistivity of the medium, its inverse being a conductivity in the medium, a united quantity relating to “noise energy or intensity” with a metres extra is maybe an area over length transform of a bulk property of a thing. The idea that a “charge” can be a bulk noise conductivity makes for an interesting question or two. Is entanglement conducted? Can qubits be shielded? Can noise be removed from a volume of space?

If noise pushes on spacetime, is it dark energy? Is the Tits gluon connection an axion with extras, conducting into the spacetime field at a particular cycle size of the double cover of the singular group from the 18n, which shall be known as the flux origin? 2F4(2)′, maybe the biggest communication opportunity this side of the supermassive black hole Sagittarius A*.

The Outer Manifold Multiverse Time Advantage Hypothesis

Assuming conductivity, and locations of the dimensionally reduced holographic manifold, plus time relativistic dilation, what is the speed of light to entanglement conduction ratio possibilities?

As noise from entanglement comes from everywhere, then any noise directionality control implies focus and control of noisy amounts from differential noise shaped sources. Information is therefore not in the bit state, but in the error spectrums of the bits.

The inner (or Lorentz) manifold is inside the horizon, and maybe the holographic principle is in error in that both manifolds project onto each other; what is inside a growing black hole remains inside, and when growth happens does the outer manifold completely get pushed further out?

A note on dimensionful invariants such as velocities: although they are invariant, they become susceptible to environmental density manipulation, whereas dimensionless invariants are truly invariant in that there is no metre or second that will ever alter the scalar value. For example, Planck’s constant is dimensionless in Uncertain Geometry house units.

So even though the decode may take a while due to the distance of the environmental entanglement and its influence on statistics (is it a radius or radius squared effect?), the isolation of transmission via a vacuum could in principle be detected. Is there a relationship between distance and time of decode for relevance of data causality?

If the spectrum of the “noise” is detectable then it must have properties different from other environmental noise, such as being the answer to a non binary question and hopefully degenerative pressure eventually forces the projection of the counter solutions in the noise, allowing detection by statistical absence.

Of course you could see it as a way of the sender just knowing what had not been received, from basic entanglement ideas, and you might be right. The speed of temperature conduction is limited by the speed of light and non “cool packed” atomic orbital occupancy, as the bulk is controlled by photon exchange and not by the degenerative limits imposed by Pauli exclusion. A quantum qubit system not under vacuum or cooling does not produce the right answer, but does it statistically very slightly produce it more often? Is the calculation drive of the gating applying a calculative pressure to the system of qubits, such that other qubits under a different calculation pressure can either unite or compete for the honour of the results?

Quantum noise plus thermal noise equals noise? 1/f? Shot noise, for example, is due to carrier conduction in PN junction semiconductors, in some instances. It could be considered a kind of particle observation by the current in the junction which gets (or can be) amplified. I’m not sure if it is independent of temperature in a limited (non plasma like) range, but it is not thermal noise.

The Lambda Outer Manifold Energy in a More General Relativity

The (inner of the horizon) manifold described by GR has a cosmological constant option associated with it. This could be filled by the “gravitation of quantum noise conduction” of the symmetrical outer manifold isomorphic field with a multiplicative force (dark energy?), such that the total, when viewed in an invariant force measure picture, is not complicated by the horizon singularities of the infinities from division by zero. Most notable is the Lorentz contraction of the outer manifold as it passes through the horizon on expansion or contraction of the radius.

The radius itself, not being invariant, cannot be cast to other observers to make sense; only calculated invariants (and I’d go as far as to say dimensionless invariants) have the required properties to be shared (or just agreed) between observers without questions of relativistic reshaping. Communication does not have to happen to agree on this knowledge of the entangled dimensionless measure.

CMB Focus History

With the CMB, assume a temperature bend due to density and distance from a pixel; a back step in time then becomes a new picture with its own fluctuations in density, and hence a bend to sum an action on a pixel for an earlier accumulation over pixels drifting at a bent velocity. Motion in the direction of heat moved further back in time. Does anything good show up? Does the moment weight of other things besides an inverse square bend look a little different?

So as the transparency emission happened over a time interval, the mass should allow a kind of focus back until the opacity happens. Then that is not so much of a problem as it appears (or not), as it is a fractional absorption ratio, and the transparency balance passes or crosses through zero on an extrapolation of the expectation of continuation.

Then there may be further crossings back as the down conversion of the redshift converts ultra gamma into the microwave band and lower. The fact that the IF stage of the CMB receiver has a frequency response curve, and that a redshift function may be defined by a function in variables, might make for an interesting application of an endpoint integral, as the swapping of a series in dx (Simpson’s rule) becomes a series in differentials of the function but with an exponential kind of weighting better applicable to series acceleration.

Looking back via a kind of differential calculus induction of function, right back, and back. The size of the observation aperture will greatly assist, as would effective interpolation in the size of the image with some knowledge of general relativity and the 3D distance of the source of the CMB.

To the Manifold and Beyond

Always fun to end with a few jokes so the one about messing with your experiment from here in multiple ways, and taking one way home and not telling you if I switched it off seems a good one. There are likely more, but today has much thought in it, and there is quite a lot I can’t do. I can only suggest CERN keep the W+ and W- events in different buckets on the “half spin anti-matter opposite charge symmetry, full spin boson anti-matter same charge symmetry as could just be any” and “I wonder if the aliens in the outer universe drew a god on the outside of the black hole just for giggles.”

 

Differential Modulation So Far

Consider the mapping x(t+1) = k.x(t).(1-x(t)) made famous in chaos mathematics. Given a suitable set of values of k, one for each of the symbols to be represented on the stream, preferably values which produce a chaotic sequence, the sequence can be map stretched to encompass the transmission range of the signal swing.

Knowing that the initial state is represented with an exact precision, and that all calculations are performed using deterministic arithmetic with rounding, it becomes obvious that for a given transmit precision it becomes possible to recover some pre-reception transmission by inferring the preceding chaotic sequence.

The maximal likelihood calculation to obtain a “lock” would be involved and extensive, but after lock the calculation overhead would go down and just assist in a form of error correction. In terms of noise immunity this would be a reasonable modulation, as the past estimation would become more accurate given reception time and higher knowledge of the sequence, its meaning and its scope of sense in decode.
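
A toy sketch of the transmit side (the symbol-to-k mapping, the seed and the samples per symbol are illustrative, and the map stretching to the signal swing is left out):

def modulate(bits, x0=0.618, samples_per_symbol=8, k_values=(3.91, 3.97)):
    # drive x <- k*x*(1-x) with k chosen per symbol; a receiver using the same
    # deterministic rounding can infer the k sequence (hence the bits) once locked
    x, out = x0, []
    for b in bits:
        k = k_values[b]
        for _ in range(samples_per_symbol):
            x = k * x * (1.0 - x)
            out.append(x)
    return out

print(modulate([1, 0, 1, 1, 0])[:4])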

Time Series Prediction

Given any time series of historical data, the prediction of the future values in the sequence is a computational task which can increase in complexity depending on the dimensionality of the data. For simple scalar data a predictive model based on differentials and expected continuation is perhaps the easiest. The order to which the series can be analysed depends quite a lot on numerical precision.

The computational complexity can be limited by using the local past to limit the size of the finite difference triangle, with the highest order assumption of zero or a Monte Carlo spread Gaussian. Other predictions based on convolution and correlation could also be considered.

When using a local difference triangle, the outgoing sample that makes way for the new sample in the sliding window can be used to make a simple calculation of the error introduced by “forgetting” the information. This could be used in theory to control the window size, or the Monte Carlo variance. It is a measure related to the Markov model of a memory process, with the integration of high differentials multiple times giving more predictive deviation from that which will happen.
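
A small sketch of prediction from a local difference triangle (the window length and the zero assumption on the highest-order difference are the knobs discussed above):

def predict_next(window):
    # build the finite difference triangle of the sliding window and extrapolate
    # one step by assuming the next higher-order difference is zero
    triangle = [list(window)]
    while len(triangle[-1]) > 1:
        prev = triangle[-1]
        triangle.append([b - a for a, b in zip(prev, prev[1:])])
    return sum(row[-1] for row in triangle)

print(predict_next([1, 4, 9, 16, 25]))   # 36 for a quadratic sequence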

This is obvious when seen in this light. The time sequence has within it an origin in differential equations, although of extreme complexity. This is why spectral convolution correlation works well. Expensive to compute, but it works well. Other methods have a lower compute requirement, and this is why I’m focusing on other methods these past few days.

A modified Gaussian density approach might be promising. Assuming an amplitude categorization about a mean, so that the signal (of the time series in a DSP sense) density can approximate “expected” statistics when mapped from the Gaussian onto the historical amplitude density given that the motion (differentials) have various rates of motion themselves in order for them to express a density.

The most probable direction until over probable changes the likely direction or rates again. Ideas form from noticing things. Integration for example has the naive accumulation of residual error in how floating point numbers are stored, and higher multiple integrals magnify this effect greatly. It would be better to construct an integral from the local data stream of a time series, and work out the required constant by an addition of a known integral of a fixed point.

Sacrifice of integral precision for the non-accumulation of residual power error is a desirable trade-off in many time series problems. The inspiration for the integral estimator came from this understanding. The next step in DSP from my creative perspective is a Gaussian compander to normalize high-passed (or regression subtracted, normalized) data to match a variance and mean stabilized Gaussian amplitude.
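
One way such a compander could be built (a rank-based sketch; the author may have a different construction in mind): map each sample through its empirical CDF and then through the inverse Gaussian CDF, so the output approximates a zero-mean, unit-variance Gaussian amplitude whatever the historical amplitude density was.

import numpy as np
from statistics import NormalDist

def gaussian_compand(x):
    # empirical CDF -> inverse normal CDF (rank-based Gaussianisation)
    x = np.asarray(x, dtype=float)
    ranks = np.argsort(np.argsort(x))        # 0..n-1 rank of each sample
    u = (ranks + 0.5) / len(x)               # empirical CDF, kept off 0 and 1
    inv = NormalDist().inv_cdf
    return np.array([inv(p) for p in u])

out = gaussian_compand(np.random.exponential(size=1000))   # heavy one-sided input density
print(out.mean(), out.std())                               # approximately 0 and 1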

Integration as a continued sum of Gaussians would via the central limit theorem go toward a narrower variance, but the offset error and same sign square error (in double integrals, smaller but no average cancellation) lead to things like energy amplification in numerical simulation of energy conservational systems.

Today’s signal processing piece was sparseLaplace, for quickly finding, for some sigma and time, the integral going toward infinity. I wonder how the series of the integrals goes as a summation of increasing sections of the same time step, and how this can be accelerated as a series approximation to the Laplace integral.

The main issue is that it is calculated from the localized data, good and bad. The accuracy depends on the estimates of differentials and so on the number of localized terms. It is a more dimensional “filter” as it has an extra set of variables for the centre and length of the window of samples as well as sigma. A few steps of time should be all that is required to get a series summation estimate. Even the error in the time step approximation to the integral has a pattern, and may be used to make the estimate more accurate.
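
A hedged sketch of the kind of thing sparseLaplace could be doing (the name, window handling and truncation order here are my own illustration, not the actual routine): repeated integration by parts turns the tail integral of f(t).e^(-s.t) into an endpoint series in the differentials of f, which can be estimated from the localized samples alone.

import numpy as np

def sparse_laplace_tail(samples, dt, s, t0_index, orders=8):
    # tail integral of f(t)*exp(-s*t) from t0 to infinity, as
    # exp(-s*t0) * sum_n f^(n)(t0) / s^(n+1), assuming the series converges
    f = np.asarray(samples, dtype=float)
    total, deriv = 0.0, f.copy()
    for n in range(orders):
        total += deriv[t0_index] / s**(n + 1)
        deriv = np.gradient(deriv, dt)       # next differential estimate from the local data
    return np.exp(-s * t0_index * dt) * total

t = np.arange(0.0, 4.0, 0.01)
print(sparse_laplace_tail(np.exp(-t), 0.01, s=2.0, t0_index=50),
      np.exp(-1.5) / 3.0)                    # exact tail of exp(-t)*exp(-2t) from t = 0.5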

AI and HashMap Turing Machines

Considering that a remarkable abstract datatype or two is possible, and perhaps closely models the human sequential thought process, I wonder today what applications this will have when a suitable execution model, ISA and microarchitecture have been defined. The properties of controllable locality of storage and motion, along with read and write, branch on stimulus and other yet to be discovered machine operations, make for a container for a kind of universal Turing machine.

Today is a good day for robot consciousness, although I wonder just how applicable the implementation model is for biological life all the universe over. Here’s a free paper on a condensed few months of abstract thought.

Computative Psychoanalysis

It’s not just about IT; it’s thrashing through what the mind does, can be made to do, and did. It all leverages information and modelling simulation growth for matched or greater ability.

Yes, it could all be made in neural nets, but given the tools available, why would you choose to stick with the complexity and lack of density of such a solution? A reasoning accelerator would be cool for my PC. How is this going to come about without much worktop workshop? If it were just the oil market I could affect; and how did it come to pass that I was introduced to the fall of oil, and what other consequential thought sets, and hence productions, could I change?

One might call it wonder and design dressed in “accidental” reckless endangerment. What should be a simple, obvious benefit to the world becomes embroiled in competition with the drive for profit, for the control of the “others”, making it a non-happening which upsets vested interests.

Who’d have thought it from this little cul-de-sac of a planetary system. Not exactly galactic mainline. And the winner is not halting for a live mind.

A New Paper on Computation and Application

https://www.amazon.co.uk/Pipeline-Cache-Big-RISC-Computational-ebook/dp/B07XY9RSHH/ref=sr_1_1?keywords=pipeline+cache+big+risc&qid=1568807888&sr=8-1 is a nice paper on some computation issues, and eventually covers some politics and vitamin biochemistry. Not a fan? Still letting your biome let you shout at the bad people not feeding your hunger?

Shovel in the gammon all you want, and load it up with chips as a little survivor from ancient times takes advantage of the modern high carb diet and digs a hole for you.

Calculus

I don’t always get it wrong.

So it becomes a determined process to integrate. And as the two forms of integration closure are known, the process can be extended: any integration has a closed form if the series converge. Integration by parts to a series. So why? The end points can have good integral estimates, and many in-between values of the function do not need evaluation. Series acceleration should be enough. Imagine an integral from zero to m to the power a times n to the power b which equals m times n. If for some a not equal to b, does the factor of m or n become obvious? The calculation would be log of the upper limit in polytime, not linear.
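
Purely as an illustration (my own example, not necessarily the integral meant above) of how an integral whose upper limit is $m^a n^b$ can expose factor information in time proportional to the log of the limit:

$$\int_1^{m^a n^b} \frac{dt}{t} = a\ln m + b\ln n .$$

Evaluating such an expression for two exponent pairs $(a,b)$ with $a \neq b$ gives two linear equations in $\ln m$ and $\ln n$, so the individual factors would follow, with work that scales with the number of digits of the upper limit rather than the limit itself.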

The previous page was:

Think about f + c as the integral of f plus a rectangle, making the integrand always positive when offset by c, giving a defined sign and hence a binary-search opportunity.
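
Spelled out slightly (a small gloss on the sentence above, nothing more): with $c$ chosen so that $f + c > 0$ on the interval,

$$\int_a^b \bigl(f(x)+c\bigr)\,dx = \int_a^b f(x)\,dx + c\,(b-a),$$

so only the rectangle $c\,(b-a)$ is added, while $G(x)=\int_a^x \bigl(f(t)+c\bigr)\,dt$ is strictly increasing in $x$; any target value of $G$ can therefore be located by binary search on $x$.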

It wasn’t specifically developed to crack public-key things; the motivation was simplified solutions to differential equations. Anyone who’s done DE solving knows the problem with them. That problem is integration, and closing it so that it is algorithmic is a useful thing. That kind of leaves the Lambert W kind of collection-of-variables problem for real analytical DEs. Good.

It also sets a complexity limit on integration in terms of an analytic function and a series of differential orders. The “try a power series multiplied by ln x” advice is seen as good, but lacking. Hypergeometric series can be re-seen as useful for approaching the series of this closure. It may be helpful to decompose these closures into more fundamental sums of new special operators, and do some cancellation. If you find yourself pedantic about dx or plus C, then might I suggest you forget it and blunder on.

Ideas in AI

It’s been a few weeks and I’ve been writing a document on AI and AGI which is currently internal and selectively distributed. There is definitely a lot to try out, including new network arrangements or layer types, and a fundamental insight of the Category Space Theorem and how it relates to training sets for categorization or classification AIs.

Basically, the category space is normally created to have only one network loss function option to minimise on backpropagation. It can be engineered so this is not true, and training data then does not compete so much in a zero-sum game between categories. There is also some information context for an optimal order of categorization when using non-exact storage structures.

Book Published in Electronic Format. Advanced Content not Beginner Level. Second Edition may Need a Glossary.

The book is now live at £3 on Amazon in Kindle format.

It’s a small book, with some bad typesetting, but getting information out is more important for a first edition. Feedback and sales are the best way for me to decide if and what to put in a second edition. It may be low on mathematical equations but does need an in-depth understanding of neural networks, and some computer science.

AI as a Service

The product development starts soon, from the initial work done over the last few weeks: an AI which has the aim of being more performant per unit cost. This is to be done by adding in “special functional units” optimized for effects that are better done by these than by a pure neural network.

So apart from mildly funny AaaS selling jokes, this is a serious project initiative. The initial tests, when available, will compare the resources used to achieve a level of functional equivalence. In this regard I am not expecting superlative leaps forward, although this would be nice, but gains in the general trend toward AI for specific tasks.

The plan is to extend the already available sources (quite a few) with flexible licences, building easy-to-use AI with some modifications, perhaps extensions to open standards such as ONNX, and on to maybe VHDL FPGA and maybe ASIC.

Simon Jackson, Director.

Pat. Pending: GB1905300.8, GB1905339.6

Today’s Thought


import 'dart:math';

class PseudoRandom {
  // Reversible 32-bit linear congruential generator (late fields for null safety).
  late int a;
  late int c;
  int m = 1 << 32;
  late int s;
  late int i;

  PseudoRandom([int prod = 1664525, int add = 1013904223]) {
    a = prod;
    c = add;
    s = Random().nextInt(m) * 2 + 1;//odd
    next();// a fast round
    i = a.modInverse(m);//4276115653 as inverse of 1664525
  }

  int next() {
    return s = (a * s + c) % m;
  }

  int prev() {
    return s = (s - c) * i % m;
  }
}

class RingNick {
  List<double> walls = [ 0.25, 0.5, 0.75 ];
  int position = 0;
  int mostEscaped = 1;//the lowest pair of walls 0.25 and 0.5
  int leastEscaped = 2;//the highest walls 0.5 and 0.75
  int theThird = 0;//the 0.75 and 0.25 walls
  bool right = true;
  PseudoRandom pr = PseudoRandom();

  int _getPosition() => position;

  int _asMod(int pos) {
    return pos % walls.length;
  }

  void _setPosition(int pos) {
    position = _asMod(pos);
  }

  void _next() {
    int direction = right ? 0 : walls.length - 1;//truncate to 2
    double wall = walls[_asMod(_getPosition() + direction)];
    if(pr.next() > (wall * pr.m).toInt()) {
      //jumped
      _setPosition(position + (right ? 1 : walls.length - 1));
    } else {
      //not jumped
      right = !right;//bounce
    }
  }

  void _prev() {
    int direction = !right ? 0 : walls.length - 1;//truncate to 2
    double wall = walls[_asMod(_getPosition() + direction)];
    if(pr.s > (wall * pr.m).toInt()) {// the jump over before sync
      //jumped
      _setPosition(position + (!right ? 1 : walls.length - 1));
    } else {
      //not jumped
      right = !right;//bounce -- double bounce and scale before sync
    }
    pr.prev();//exact inverse
  }

  void next() {
    _next();
    while(_getPosition() == mostEscaped) _next();
  }

  void prev() {
    _prev();
    while(_getPosition() == mostEscaped) _prev();
  }
}

class GroupHandler {
  late List<RingNick> rn;

  GroupHandler(int size) {
    if(size % 2 == 0) size++;
    rn = List<RingNick>.generate(size, (_) => RingNick());//populate the ring nicks
  }

  void next() {
    for(RingNick r in rn) r.next();
  }

  void prev() {
    for(RingNick r in rn.reversed) r.prev();
  }

  bool majority() {
    int count = 0;
    for(RingNick r in rn) if(r._getPosition() == r.leastEscaped) count++;//a main cumulative
    return (2 * count > rn.length);// the > 2/3rd state is true
  }

  void modulate() {
    for(RingNick r in rn) if(r._getPosition() == r.leastEscaped) {
      r._setPosition(r.theThird);
    } else {
      //mostEscaped eliminated by not being used
      r._setPosition(r.leastEscaped);
    }
  }
}

class Modulator {
  GroupHandler gh = GroupHandler(55);

  int putBit(bool bitToAbsorb) {//returns absorption status
    gh.next();
    if(gh.majority()) {//main zero state
      if(bitToAbsorb) {
        gh.modulate();
        return 0;//a zero yet to absorb
      } else {
        return 1;//absorbed zero
      }
    } else {
      return -1;//no absorption emitted 1
    }
  }

  int getBit(bool bitLastEmitted) {
    if(gh.majority()) {//zero
      gh.prev();
      return 1;//last bit not needed emit zero
    } else {
      if(bitLastEmitted) {
        gh.prev();
        return -1;//last bit needed and nothing to emit
      } else {
        gh.modulate();
        gh.prev();
        return 0;//last bit needed, emit 1
      }
    }
  }
}

class StackHandler {
  List<bool> data = [];
  Modulator m = Modulator();

  int putBits() {
    int count = 0;
    while(data.length > 0) {
      bool v = data.removeLast();
      switch(m.putBit(v)) {
        case -1:
          data.add(v);
          data.add(true);
          break;
        case 0:
          data.add(false);
          break;
        case 1:
          break;//absorbed zero
        default: break;
      }
      count++;
    }
    return count;
  }

  void getBits(int count) {
    while(count > 0) {
      bool v;
      v = (data.length == 0 ? false : data.removeLast());//zeros out
      switch(m.getBit(v)) {
        case 1:
          data.add(v);//not needed
          data.add(false);//emitted zero
          break;
        case 0:
          data.add(true);//emitted 1 used zero
          break;
        case -1:
          break;//bad skip, ...
        default: break;
      }
      count--;
    }
  }
}

Statistics and Damn Lies

I was wondering over the statistics problem I call the ABC problem. Say you have 3 walls on a circular path, of different heights, and between them are cells marked A, B and C. In any ‘turn’ the ‘climber’ attempts to scale the wall in the current clockwise or anti-clockwise direction. The chance of failing to scale a wall is proportional to its height (as in the RingNick code above). If the climber fails to get over a wall, they reverse direction. A simple thing, but what are the chances that the climber will be found facing clockwise just before attempting (or not) a wall? Is it close to 0.5, given the problem is not symmetric?

More interestingly, the climber will in a very real sense be captured more often in the cell with the highest pair of walls. If the cell with the lowest pair of walls is considered as just a consumption of time, then what is the ratio of that containment time to the total time not spent in the most easily escaped cell?
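
A quick way to put numbers on these questions is simulation. A minimal Monte Carlo sketch, assuming the wall heights 0.25, 0.5 and 0.75 from the RingNick code above and treating the wall height as the probability of failing to scale it (all names below are illustrative):

import 'dart:math';

// Monte Carlo estimate for the ABC wall problem: probability of facing
// clockwise just before an attempt, and occupancy of each cell.
void main() {
  final walls = [0.25, 0.5, 0.75];
  final rng = Random();
  var position = 0; // cell index, bounded by walls[position] on the right
  var right = true; // "clockwise"
  var facingRightCount = 0;
  final cellCounts = [0, 0, 0];
  const turns = 1000000;

  for (var t = 0; t < turns; t++) {
    if (right) facingRightCount++;
    cellCounts[position]++;
    // Wall in the current direction of travel.
    final wallIndex = right ? position : (position + walls.length - 1) % walls.length;
    if (rng.nextDouble() > walls[wallIndex]) {
      // Scaled the wall: move to the next cell in that direction.
      position = (position + (right ? 1 : walls.length - 1)) % walls.length;
    } else {
      // Failed: bounce and reverse direction.
      right = !right;
    }
  }

  print('P(facing clockwise) ~ ${facingRightCount / turns}');
  print('cell occupancy ~ ${cellCounts.map((c) => c / turns).toList()}');
}

The occupancy counts give both the facing-clockwise estimate and the containment-time ratio directly.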

So for the binomial distribution of the elimination of the ‘emptiest’ cell when repeating this pattern as an array with co-prime ‘dice’ (if all occupancy has to be in either of the two most secure cells in each ‘ring nick’), the rate depends on the number of ring nicks. The considered security majority state is the state (selected from the two most secure cell states) which more of the ring nicks are in, given none are in the least secure of the three states.

For the ring nick array to be in the most secure majority more than two thirds of the time is another binomial or two away. If the most secure state is in the majority more than two-thirds of the time (excluding the gaping, minimally occupied cells), and the middle-security cells are in the majority less than two-thirds of the time (by unitary summation), then there exists a Jaxon Modulation coding to place data on the prisoners by reversing all their directions at once where necessary, inverting the majority into a rarer minority state with more Shannon information. Note that the pseudo-random dice and other quantifying information remain constant in bits.

Dedicated to Kurt Gödel … I am number 6. 😀

Kindle Fire (Pt. III)

A general complaint about Android devices: when you’re low on power, the device always wants to switch itself on and waste it rather than wait until you press the on button. It’s part of the global always-on spy network, designed for idiots with money and not for intelligent or off-grid people. Alexa likely wants to know your inside leg measurement. As I said, this is general to all Android devices, so I suppose expecting more from Amazon was just too much.

I suppose it would be too much to edit things like the above equation on the device, but I will try to see if there is such an equation-editing tool. Plenty of good calculators, but few typographical tools. I sometimes would like to do this. It’s not as though I need the mathematical assistance; it’s more the typographical layout, for including in documents.

It seems there is nothing which will do this offline. Maybe an app opportunity? Likely a long development. It depends on other tools such as MathML being hackable into something else. Of course n=k in the above equation. A bit of maths in the “analytic closure of integration” makes it a deterministic process for a CAS (Computer Algebra System). It replaces integration (hard for computers to pattern match, and based on a large and incomplete knowledge base) with simultaneous equations and factorization.

Some downloaded episodes of some series seem to have appeared this morning. Three free episodes (Number 1) of some random TV shows. I assume this is to get people into watching exciting stuff. I feel a bandwidth suck in the making. Ah, so it’s called “On Deck“, and although kind of interesting, it would be nice to make it only use certain WiFi networks. While on a 4G hotspot proxy, it will make my bank account sad.

GEM Unification

This is the further result of adding Coulomb force gradients into the theory of Uncertain Geometry. The GEM (Geometry/Gravity and Electro-Magnetism) Unification hints at the above table of particles: a mass genera of “Dark” matter (B), and some strange matter (A). The paper so far can be downloaded from Google Drive. I’m currently searching for a suitable equation relating to the Weak force. I have no proof yet that it would be emergent, but the particle grid already includes a “dark matter” column (including a dark neutrino (yellow)), and a “not so dark” but very strange and heavy particle type A.

MaxBLEP Audio DSP

TYPE void DEF blep(int port, float value, bool limit) SUB
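	// Assumed external state, inferred from this listing rather than given:
	//   blb[]        previous input value per port (for the next delta)
	//   bl[]         per-port ring buffers: 16 BLEP summation slots at offset
	//                32*port, and 16 residual slots at offset 32*port + 16
	//   idx          running sample index
	//   blepFront[]  16-entry band-limited step (BLEP) front table
	//   MAXINT       full-scale integer range used for the under-bits residual
	//   clip(), _OUT()  the limiter and port output routines
	// TYPE/DEF/SUB/RETURN are the author's macro conventions for C-like code.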
	//limit line level
	if(limit) value = clip(value);
	//blep fractal process residual buffer and blep summation buffer
	float v = value;
	value = blb[port] - value - bl[((idx) & 15) + 32 * port + 16];//and + residual
	blb[port] = v;//for next delta
	for(int i = 0; i < 15; i++) {
		bl[((i + idx + 1) & 15) + 32 * port] += value * blepFront[i];
	}
	value += bl[((idx) & 15) + 32 * port];//blep
	float r = value - (float)((int16_t)(value * MAXINT)) / (float)MAXINT;//under bits residual
	bl[((idx) & 15) + 32 * port + 16] = value * (blepFront[15] - 1.0);//residual buffer
	bl[((idx + 1) & 15) + 32 * port] += r;//noise shape
	idx++;
	//hard out
	_OUT(port, value - r);//start the blep
RETURN

Yes, an infinite zero-crossing BLEP. … Finance and the BLEP-reduced noise of micro-transactions.

Block Tree Topological Proof of Work

Given that a blockchain has a limited entry rate on the chain due to the block uniqueness constraint, a more logical mass-blocking system would use a tree graph, to place many leaf blocks on the tree at once. This can be done by assigning the fold of the leading edge of the tree onto random previous blocks, to achieve a number of virtual pointer rings, setting a joined pair of blocks as a new node in an Euler number mapping to a competition on genus and closure of the tree head leaf list to match block use demand.

The coin, as it were, is the genus topology, with weighted construction ownership of node value. The data decides part of the selection of the tree leaf-node loop-back pointers. The randomness allows a spread of topological properties in the proof-of-work space.