Pi Pico

So I got a cheap Pi Pico, as the postage and packaging were more expensive than adding in a Pi Pico to exceed the minimum free P&P limit. It has about 2 MB of flash, a dual-core processor and some simple GPIO. It looks like it can do about 500k samples/s of ADC across 3 channels at 12 bits, so an audio project seems like a good idea to try.

It has some Amiga “copper” style coprocessors for IO too, so making a video raster scan is likely easy. It does have enough power to simulate an 8-bit core with a spare CPU left over for other purposes. At 133 MHz that’s quite an efficient bit of silicon area. A 1980s super-computer with slightly less vector parallelism and a bigger (smaller) storage medium. Bargain!

Moonshine Elliptical

Moonshine Elliptical represents my latest combination of commentary and findings on the massively impressive tome of knowledge about elliptic curve theory (useful in cryptography and in generally understanding space and time). It covers fields of characteristic two, three and others (the characteristic being how many times the multiplicative identity must be summed to equal the additive identity), along with factorization of parts of the world mechanic into finite simple groups (the extended concept of primes, of which the primes are just one sequence).

1729

Boltzmann-Fermi-Dirac Colour Charges

It’s a long shot, but imagine if you will a gluon made of two halves. The halves can each be drawn from the two “weights” (the low and high of a non-zero-sum field, if they broke symmetry) and the two “charges” (as from the zero-sum field).

Given cancellation and combination, 8 gluons happen.

L+L+, L-L-, L+H-, L-H+, L+H+, L-H-, H+H+, H-H-.

So there’s more green weight “sticky” and the Boltzmann distribution for the half Bose-Einstein as a Fermi-Dirac perhaps. The blue colour perhaps travels less far due to higher “mass” (if it splits), but as the energy input in the strong force makes more gluons at a critical threshold, the further interaction has more energy and a less gluey implementation in blue.

I wonder if the QCD simulation evaluations can take this all into account for better accuracy. I put the two yellow “charges” in there, which technically would be massed green, but given the charge +/- cancellation, without perhaps LH equality, it would suggest a kind of neutral weight dipole.

EDIT: As the energy increases, moving into “small x QCD”, the colour coverage expands; to prevent saturation by a UV gluon density catastrophe, the critical temperature is exceeded, the “Cooper pair” effect on the half bosons is removed, and Pauli colour saturation removes the gluon density within the nucleon. Yes, this paragraph is unproven, but there must be some effect stabilising the UV catastrophe. This would also lead to a cyclic order of colour based on mass expression above the critical temperature for some critically small x.

Parse Buffer Overflows? Dark Priorities.

Sounds like such fun. An irremovable or a point update fix on the press? https://github.com/jackokring/majar/blob/master/src/uk/co/kring/kodek/Generator.java sounds like fun too. Choices, choices? Amplified radial uncertainty of Δr·GMm·Δt ≤ ℏ·r²/2 was kind of the order of last night. Is it dark matter? Is tangential uncertainty in the same respect part of dark energy? The radial uncertainty in a sure instant of time, and the potential gravitational energy? A net inward force congruent with dark energy?

And a tangential version of the squared hypotenuse of radius and tangential uncertainty of radius resultant? That leads to a reduction of gravity at a large radius and is more like dark energy. More evidence for a spectrum of uncertainty amount hence the “less than equals” being simplistic on an actuality?

Oh, no I’ll have to investigate the last GET/POST before errors … how boring (last time an Indian) … guess who?

The Small Big G and Why Gravity?

As G, the gravitational constant, is small compared to other force constants, this would make Δr bigger in gravity for the same amplified ħ uncertainty. With the time accuracy of light arrival in the visible range, the radial uncertainty at a high radial distance integrates over the non-linearity of the 1/r² force, for a net inward pull. Tangentially, the integral would net a reduction in gravity.

Δr·GMm·Δt ≤ ℏ·r²/2

So a partial reason for dark matter and dark energy to be explained by quantum gravity. It’s a simple formula, with Δv/Δt as a substitute (for Δp = mΔv, using F = ma = GMm/r²) in ΔxΔp ≤ ℏ/2, so the answer is approximate; an r±Δr might be more appropriate for exacting calculations, and r²+Δr² as a squared tangential hypotenuse.
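
As a worked step (a sketch keeping the sign convention as written in this post, where the standard uncertainty relation would use ≥):

Δx·Δp ≤ ℏ/2, with Δp = m·Δv = m·a·Δt = (GMm/r²)·Δt
⇒ Δr·(GMm/r²)·Δt ≤ ℏ/2
⇒ Δr·GMm·Δt ≤ ℏ·r²/2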

As the coupling in https://en.wikipedia.org/wiki/Coulomb%27s_law is 20 orders of magnitude higher, the dark Coulomb force will be 10 orders of radius larger for the same effect.
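
A rough scaling check (assuming the analogous relation Δr·kQq·Δt ≤ ℏ·r²/2 holds for the electrostatic case): for fixed Δr and Δt the radius at which the bound bites scales as the square root of the coupling, so 20 orders of magnitude in coupling gives √(10²⁰) = 10¹⁰, i.e. 10 orders of radius.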

As the Mass by the Cube, and the Uncertainty by the Square.

As the distance to the centre of a gravitational lens increases, the radial uncertainty of the mass becomes significant, effectively reducing the minimal acceleration due to gravity and growing the volume bulk integral of mass in uncertainty. The force delta would be inverse cubic, countered by the cubic growth in integration volume. The force would therefore, in isotropy, become a fixed quantity effect.

This is not even considering the potential existence of a heavy graviton, or the concept of conservation of a mass information velocity that would have a dark energy effect. It still seems “conservation of acceleration” is not even a taught effect considering there are many wine glasses that would have loved to know about it.

As for the rapid running-coupling increase toward the unification energy and what inner sun horizons would do to a G magnification? Likely not that relevant? Only the EM force seems to increase in coupling as the energy of the system dilates in time. This would imply the other three standard forces decrease, so necessitating an increase in radial uncertainty on average. The strong force grows with distance below the confinement distance, and so as the radius reduces, a Δr·k·Δt ≤ ℏ/(2r) rule is likely, which would lead to the most likely reciprocal isomorphism of dark matter and dark energy.

Due to quark mass differences, and k therefore being one of 15 = 6·(6−1)/2 constants depending on the quark pair, a triad product pentad structuring of force to acceleration might occur, with further splitting by boson interactions with quarks. Maybe this is a long shot to infer the finality of the low energy quark set of 6. Likely a totient in there for an 8. That’s all in the phi line and golden, silver and forcing theorems. I wonder if forcing theorems have unforcing and further forcing propergatives?

≤?

You could be right.  So? It’s not as though it affected any of the local accelerators I don’t have. If it’s all about the bit not understood, then as a product constraint, it is where the action is at. As the maths might work, I am speculating the further equations will be in a less than form and so need fewer corrections? Premature optimization is the root? Any tiny effect would be on that side of equality perhaps. Maybe it was just a tilt on the suggestion of an inverse isomorphism. I couldn’t say, but that’s how it exited my mind.

K Ring CODEC Existential Proof

When p = 2q, L(0) is not equal to L(1).

Find n such that (L(0)/L(1))^(2n+1) defines the number of bias elements for a certain bias exceeding 2:1. This is not the minimal number of bias elements but is a faster computation of a sufficient existential cardinal order. In fact, it’s erroneous. A more useful equation is

E=Sum[(1-p)*(1-q)*(2n-1)*(p^(n-1))*q^(n-1)+((1-p)^2)*2n*(q^n)*p^(n-1),n,1,infinity]

Showing an asymmetry on pq for even counts of containment between adding entropic pseudo-randomness. So if the direction is PQ biased detection and subsample control via horizontal and vertical position splitting? The bit quantity of clockwise parity XOR reflection count parity (CWRP) has an interesting binary sequence. Flipping the clockwise parity and the 12/6 o’clock location inverts the state for modulation.
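
A minimal numeric sketch of the sum above (my own illustration: it truncates the series at a finite number of terms, and the p and q in main are just example probabilities with p = 2q):

import 'dart:math';

double eSeries(double p, double q, {int terms = 1000}) {
  var sum = 0.0;
  for (var n = 1; n <= terms; n++) {
    final first = (1 - p) * (1 - q) * (2 * n - 1) * pow(p, n - 1) * pow(q, n - 1);
    final second = (1 - p) * (1 - p) * 2 * n * pow(q, n) * pow(p, n - 1);
    sum += first + second; // both terms shrink geometrically while p*q < 1
  }
  return sum;
}

void main() {
  print(eSeries(0.4, 0.2)); // p = 2q example
}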

So asymmetric baryogenesis: that process of some bias between antimatter and matter which have an apparently identical mirror symmetry with each other. There must be an existential mechanism, and in this mechanism a way of digitizing the process and finding the equivalents to matter and antimatter. Some way of utilizing a probabilistic asymmetry along with a time application to the statistic, so that apparent opposites can be made to present a difference on some time presence count.

Proof of Topological Work

A cryptocoin mining strategy designed to reduce power consumption. The work is divided into tiny bits of work with bits of stall caused by data access congestion. The extensive nature of solutions and the variance of solution time reduce conflict, as opposed to a single hash function solve. As joining a fork increases the splitting of share, focusing the tree spread back into a chain has to be considered. As pull request ordering tokens can expire before a pull request is logged with a solution, pull request tokens have to be requested at intervals and also after expiry, while any solution needs a valid pull request token included in the pull request, so that the first solution in a time interval can invalidate later pull requests solving the same interval.

The pull request token contains an algorithmic random and the head random based on the solution of the previous time interval, which must be used to perform the work burst. It therefore becomes pointless to issue pull request tokens for a future time interval, as the head of the master branch has not been fixed, and so the pull request token would not, by a large order, be checksum valid.
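
A hypothetical sketch of such a token (the field names are my own illustration of the description above, not a specification):

class PullRequestToken {
  final int intervalStart;     // the time interval this token was issued for
  final int intervalEnd;       // expiry: unused tokens lapse and must be re-requested
  final int algorithmicRandom; // issued random
  final int headRandom;        // derived from the solution of the previous interval

  PullRequestToken(this.intervalStart, this.intervalEnd,
      this.algorithmicRandom, this.headRandom);

  // a solution is only acceptable with an unexpired token for its interval,
  // and only if no earlier solution has already closed that interval
  bool validFor(int now, {required bool intervalAlreadySolved}) =>
      now >= intervalStart && now < intervalEnd && !intervalAlreadySolved;
}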

The master head address becomes the congestion point. The address is therefore published via a torrent-like mechanism with a clone performed by all slaves who wish to become the elected master. The slaves also have a duty to check the master for errors. This then involves pull-request submissions to the block-tree (as git is) on various forks from the slave pool.

This meta-algorithm therefore can limit work done per IP address by making the submission IP be part of the work specification. Some may like to call it proof of bureaucracy.

The Cryptoclock

As running a split network on a faster clock seems the most effective hack, the master must set the clock by signed publication. On a clock split, the closest modulo hashed time plus block slave salt wins. The slave throne line is on the closest modulo hashed values for salt, with signed publication. This ensures a corrupt master must keep all slave salts (or references) in the published blocks. A network join must demote the split via a clock moderation factor. This ensures that culling a small subnet to run at a higher rate, to disadvantage the small subnet, is punished on the net reunion by the majority of neutrals on the throne line in the master elective, via the punitive clock rate deviation from the majority. As you could split and run lower in an attempt to punish!

Estimated 50 pounds sterling 2021-3-30 in bitcoin for the company work done 😀

The Rebase Compaction Bounty (Bonus)

Designed to be a complex task, a bounty is set to compress the blockchain structure to a rebased, smaller data equivalent. This is done by effectively removing many earlier blocks and placing a special block of archival index terminals for non-transferred holdings in the ancient block history. This is bound to happen infrequently to never, and is set at a lotto rate depending on the mined percents. This would eventually cause a work spurt based on the expected gain. The ruling controlling the energy expenditure versus the archival cost could be integrated with the wallet stagnation (into the void) by setting a wallet timeout of the order of many years.

A form of lotto inheritance for the collective data duplication cost of historic irrelevance. A super-computation only to be taken on by the supercomputer of the age. A method, therefore, of computational research as it were; not something for everybody to do, but easy for everybody to check as they compact.

An Open Standard for Large Event COVID Passports?

The POX Algorithm RFC. How to show an auth token when you have privacy but no booking or other door duty. The phone occluded xenomorph algorithm. A complex cypher to protect data at all points in transmission. What really gets shown is an event-specific checksum verify on some encrypted data, which can be further queried by a provider (such as the NHS) to obtain validity and scope for event purpose, on a statistical check basis to reduce server traffic load and focus on hot areas.

At 2953 bytes of data capacity in a QR barcode (23624 bits) there is enough scope for a double signature and some relevant data in escrow for falsification auditing. The following data layers are relevant with keys in between.

  • Verify credential entry VCE (the blind of public record customs inquiries)
    • validity decrypt key (event private key part) VDK QR
  • Door event transit DET (the over the shoulder mutable) QR
    • event encrypt key (event public key) EEK QR
  • Phone independent ephemeral PIE (the for me check)
  • A public blockchain signed hashed issue SHI (the public record) QR
    • authority signature keys (the body responsible for a trace of falsifications)
    • hashed phone number key (symmetric cypher)
    • record blind key (when combined with the event private key part makes the effective private key. Kept secret from the event)
    • confidentiality key (database to publication network security layer)
  • Actual data record ADR (the medical facts)

Various keys are required but covering the QR codes needed is perhaps better.

  • The manager VDK QR (given to the door manager)
  • The issue SHI QR (given by the provider)
  • The event EEK QR (posted online or outside the event)
  • The entry DET QR (made for the bouncer to scan)

At the point of issue, there may be a required pseudo-event to check that all is working well. The audit provider or provider (such as the NHS) has enough data on a valid VCE to call the user and the event in a conference call. Does the credential holder answer to speak to an echoing bouncer? Does the provider send a text?
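
A toy sketch only, not the POX cypher itself: an event-specific checksum over the opaque encrypted payload, plus the statistical sampling a provider could use to keep query load down. The hash and the sample rate are placeholders.

import 'dart:math';

int eventChecksum(List<int> encryptedPayload, int eventSalt) {
  var h = 0x811c9dc5 ^ eventSalt;                 // FNV-1a style toy hash, salted per event
  for (final b in encryptedPayload) {
    h = ((h ^ (b & 0xff)) * 0x01000193) & 0xffffffff;
  }
  return h;
}

bool providerSpotCheck(Random rng, {double sampleRate = 0.05}) =>
    rng.nextDouble() < sampleRate;                // audit only a fraction of scans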

Gradients and Descents

Consider a backpropagation which has just been applied to a network under learning. It is obvious that various weights changed by various amounts. If a weight changes little, it can be considered good. If a weight changes a lot, it can be considered an essential definer weight. Consider the maximal definer weight (the one with the greatest change) and change it a further per cent in its defined direction. Feedforward the network and backpropagate again. Many of the good weights will go back closer to where they were before the definer pass and can be considered excellent. Others will deviate further and be considered ok.
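
A minimal sketch of that pass (my own illustration): given the per-weight changes from the first and second backpropagation, pick the definer and bucket the rest.

// 3 = definer, 0 = excellent, 1 = good, 2 = ok (matching the tally described next)
List<int> classifyWeights(List<double> delta1, List<double> delta2) {
  var definer = 0; // index of the greatest change on the first pass
  for (var i = 1; i < delta1.length; i++) {
    if (delta1[i].abs() > delta1[definer].abs()) definer = i;
  }
  final tally = List<int>.filled(delta1.length, 1); // default: good
  for (var i = 0; i < delta1.length; i++) {
    if (i == definer) {
      tally[i] = 3;
    } else if (delta1[i] * delta2[i] < 0) {
      tally[i] = 0; // moved back toward where it was: excellent
    } else if (delta2[i].abs() > delta1[i].abs()) {
      tally[i] = 2; // deviated further: ok
    }
  }
  return tally;
}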

The signed tally of definer(3)/excellent(0)/good(1)/ok(2) can be placed as a variable of programming in each neuron. The per cent weight to apply to a definer, or more explicitly the definer history deviation product as a weight to per cent for the definer’s direction, makes a training map which is not necessary for using the net after training is finished. It does, however, enable even further processing such as “excellent definer” detection. What does it mean?

In a continual learning system, it indicates a new rationale requirement for the problem, as it has developed an unexpected change to an excellently performing neuron. The tally itself could also be considered an auxiliary output of any neuron, but what would be a suitable backpropagation for it? Why would it even need one? Is it not just another round of input to the network (perhaps not applied to the first layer, but then inputs don’t always have to be so)?

Defining the concept of definer epilepsy, where the definer oscillates due to weight gradient magnification, implies the need for the tally to be a signed quantity and also implies that weight normalization to zero should be present. This requires (but it has not been proven to be the only sufficient condition) that per cent growth from zero should be weighted slightly less than per cent reduction toward zero. This can be factored into an asymmetry stability meta.

A net of this form can have memory. The oscillation of definer neurons can represent state information. They can also define the modality of the net knowledge in application readiness while keeping the excellent all-purpose neurons stable. The next step is physical and affine coder estimators.

Limit Sums

The convergence sequence on a weighting can be considered isomorphic to a limit sum series acceleration. The net can be “thrown” into an estimate of an infinity of cycles programming on the examples. Effectiveness can be evaluated, and data estimated on the “window” over the sum as an inner product on weightings with bounds control mechanisms yet TBC. PID control systems indicate in the first estimate that differentials and integrals to reduce error and increase convergence speed are appropriate factors to measure.
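
As a concrete example of this kind of series acceleration, here is Aitken’s Δ² method (my choice of technique, not one named in the post) applied to the successive values a weighting takes as it converges:

// Aitken delta-squared acceleration of a convergent sequence s[0..n],
// e.g. the successive values a weighting takes while training settles
List<double> aitken(List<double> s) {
  final accelerated = <double>[];
  for (var n = 0; n + 2 < s.length; n++) {
    final d2 = s[n + 2] - 2 * s[n + 1] + s[n]; // second difference
    final d1 = s[n + 1] - s[n];                // first difference
    accelerated.add(d2 == 0 ? s[n + 2] : s[n] - d1 * d1 / d2);
  }
  return accelerated;
}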

Dynamics on the per cent definers so to speak. And it came to pass the adaptivity increased and performance metrics were good but then irrelevant as newer, better, more relevant ones took hold from the duties of the net. Gundup and Ciders incorporated had a little hindsight problem to solve.

Fractal Affine Representation

Going back to 1991 and Michael Barnsley developing a fractal image compression system (the Iterated Systems FIF file format). The process was considered computationally intensive in time for very good compression. Experiments with the FIASCO compression system, an open-source derivative, indicate that the best performance lies at low quality (about 50%), which is very fast but not exact. If the compressed image is subtracted from the input image and the residual further compressed a number of times, performance is improved dramatically.

Dissociating secondaries and tertiaries from the primary affine set allows disjunct affine sets to be constructed for equivalent compression performance where even a zip compression can remove further information redundancy. The affine sets can be used as input to a network, and in some sense, the net can develop some sort of affine invariance in the processed fractals. The data reduction of the affine compression is also likely to lead to better utilization of the net over a convolution CNN.

The Four Colour Disjunction Theorem.

Consider an extended ensemble. The first layer could be considered a fully connected layer distributor. The last layer could be considered to unify the output by being fully connected. Intermediate layers can be either fully connected or colour-limited connected, where only neurons of a colour connect to neurons of the same colour in the next layer. This provides disjunction of weights between layers and removes a competition upon the gradient between colours.
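
A small sketch of the colour-limited connectivity (illustrative only; colours assigned round-robin):

// true where neuron i in one layer may connect to neuron j in the next layer:
// only when both carry the same colour, giving disjunct weight groups
List<List<bool>> colourMask(int width, {int colours = 4}) =>
    List.generate(width,
        (i) => List.generate(width, (j) => i % colours == j % colours));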

Four is really just a way of seeing the colour partition and does not really have to be four. Is an ensemble of 2 nets of half size better for the same time and space complexity of computation, with a resulting lower accuracy of one colour channel but in total higher discriminatory performance by the disjunction of the feature detection?

The leaking of cross information can also be reduced if it is considered that feature sets are disjunct. Each feature under low to no detection would not bleed into features under medium to high activation. Is the concept of grouped quench useful?

Query Key Transformer Reduction

From a switching idea in telecommunications, an N*N array can be reduced to an N*L array pair and an L*L array, mostly functional due to sparsity. Any cross-product (from its routing of an in into an out) essentially becomes a set of 3 sequential routings, with the first and last being the compression and expansion multiplex to the smaller switch. Cross talk grows to some extent, but this “bleed” of attention is a small consideration given the variance spread of having 3 routing weights to product up into one effective weight, and computation is less due to L being a smaller number than N.
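
A quick weight count under this reduction (numbers purely illustrative): the full array has N·N entries, while the reduced route has N·L + L·L + L·N = 2NL + L². For N = 512 and L = 64 that is 262144 against 69632, roughly a 3.8× saving, before counting the reduced computation.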

The Giant Neuron Hypothesis

Considering the output stage of a neuronal model is a level-sliced integrator of sorts, the construction of RNN cells would seem obvious. The hypothesis asks if it is logical to consider the layers previous to an “integration” layer effectively an input stage, where the whole network is a gigantic neuron and integration is performed on various nonlinear functions. Each integration channel can be considered independent but could also have post layers for further joining integral terms. The integration time can be considered another input set per integrator functional. To maintain tensor shape, as two inputs per integrator are supplied, the first differential would be good also, especially where feedback can be applied.

This leads to the idea of the silicon connectome. Then as now as it became, integration was the nonlinearity of choice in time (a softmax divided by the variable, as goes with [e^x−1]/x. A groovemax if you will). The extra net uni-neuron integration layer offers the extra time feature of future estimation at an endpoint integral of network-evolved choice. The complexity of backpropagation of the limit sum through fixed constants and differentiable functions, for a zero adjustable layer insert with scaled estimation of earlier weight adjustment on previous samples in the time series under integration, makes for an ideal propagatable. Wow, that table’s gay as.
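
A tiny sketch of that nonlinearity as stated, with the x → 0 limit (which is 1) handled:

import 'dart:math';

// the "groovemax" mentioned above: (e^x - 1)/x
double groovemax(double x) => x.abs() < 1e-12 ? 1.0 : (exp(x) - 1.0) / x;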

This network idea is not necessarily recursive, and may just be an applied network with a global time delta since the last evaluation, for continuation of the processing of time series information. The actual recursive use of networks with GRU and LSTM cells might benefit from this kind of global integration processing, but can GRU and LSTM be improved? Bistable cells say yes, for a kind of registered sequential logic on the combinationals. Considering that a Moore state machine layout might be more reductionist to efficiency, a kind of register layer pair for production and consumption to bracket the net is under consideration.

The producer layer is easily pushed to be differentiable by being a weighted sum junction between the input and the feedback from the consumer layer. The consumer layer is more complex when differentiability is considered. The consumer register really could be replaced by a zeroth differential prediction of the future sample given past samples. This has an interesting property of pseudo presentation of the output of a network as a consumptive of the input. This allows use of the output in the backpropagation as input to modify weights on learning the feedback. The consumer must be passthrough in its input to output, while storage of samples for predictive differential generation is allowed.

So it’s really some kind of propagational Mealy state machine. An MNN if you’d kindly see. State of the art, art of the state. Regenerative registration is a thing of the futured.

Post-Modern Terminal CLI

As is usual with all things computing, the easy road of bootstrap before security is just an obvious order of things. It then becomes a secondary goal to become the primary input moderation tool, such that effective tooling brings benefits while not having to rely on the obscurity of knowledge. For example, a nice code-signature no-execution tool where absolutely no code even becomes partially executed if the security situation indicates otherwise.

A transparent solution is a tool for development which can export a standard script to just run within today’s environment. As that environment evolves within the future it can take on the benefits of the tool, so maybe even to the point of the tool being replaced purely by choice of the user shell, and at a deeper level by a runtime replacing the shell interpreter at the system level.

The basic text edit of a script at some primary point in the development just requires a textual representation, a checksum in the compiled code (which is in a different file), and a checksum to allow a text override with some security on detecting a change in the text. This then allows a possible benefit by a recompile option, along with just a temporary use of the textual version. It won’t look that hard in the end, with some things just having a security rating of “system local” for a passing observer.

ANSI 60 Keyboards? An Exception to the Rule?

More of an experiment in software completion. Jokes abound.

A keyboard keymap file for an ANSI 60 custom has just finished software building. Testing to follow, given that cashflow prevents buying and building the hardware on the near time scale. Not bad for a day!

A built hex file for a DZ60 is on GitHub so you don’t have to build your own, with an MD5 checksum of 596beceaa446c1f1b55ee5e0a738f1c8 to verify, for duelling the hack complexity. EDIT: version 1.7.2F (Enigma Bool Final Release). Development is complete. Only bug and documentation fixes may be pending.

It all stems from design and data entry thinking, and small observations like the control keys being on the corners like the thumbs to chest closeness of baby two-finger hackers instead of the alt being close in for the parallel thumbs of the multi-finger secretariat.

The input before the output, the junction of the output to our input. It’s a four-layer main layout with an extra four layers for function shift. Quite a surprising amount can be fit in such a small 60 keyspace.

The system allows intercepts of events going into the widget, yet the focus priority should be picking up the non-processed outgoings. Of course, this implies the atom widget should be the input interceptor, to reflect the message for outer processing in a context. This implies that only widgets which have no children, or administered system-critical widgets, can processEventInflow, while all can processEventOutflow, so silly things have less chance of happening in the certain progress of process code.

Perhaps a method signature of super protected such that it has a necessary throws ExistentialException or such. Of course, the fact RuntimeException extends Exception (removing a code compilation constraint) is a flaw of security in that it should only have allowed the adding of a constraint by making (in the code compile protection against an existential) Exception extending RuntimeException.

Then the OS can automatically reflect the unhandled event back up the event outflow queue, along with an extra event (with a link to the child it went in to, and an exposed list of its child widgets) to outflow. An OrphanCollector can then decide to still show the child widgets or not, with the opportunity of newEventInflow. All widgets could also be allowed to newEventOutflowForRebound, itself a super protected method with a necessary throws ExistentialException (to prevent injection of events from non-administered widgets).

An ExistentialException can never be caught in user code to remove the throws clause and use of super try requires executive privilege to prevent executive code from being loaded by the ClassLoader. It could run but in a lower protection ring until elevated.

An Interpolation of Codecs into the ISO Network Model

  1. Paper
  2. (Media Codec)
  3. Symbols
  4. (Rate Codec)
  5. Envelope
  6. (Ring Codec) 3, 2 …
  7. Post Office
  8. (Drone codec)
  9. Letter Box
  10. (Pizza codec)
  11. Name
  12. (Index codec)
  13. Dear

Considering the ISO network model of 7 layers can be looked at as an isomorphism to letter delivery, with Paper being the lowest hardware layer and Dear being the application layer, there is a set of 6 codecs which transform layer to layer, and so a more exacting 13-layer model is just as obvious given the requisite definitions.

There also would exist a Loop Codec which would virtualize via an application a container of a virtual hardware layer on which another stack of 13 could be founded.

Differential Modulation So Far

Consider the mapping x(t+1) = k·x(t)·(1−x(t)) made famous in chaos mathematics. Give each of the symbols to be represented on the stream a suitable value of k, preferably of a size which produces a chaotic sequence. The sequence can be map-stretched to encompass the transmission range of the signal swing.
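
A minimal sketch of such a modulator (the symbol set and k values are illustrative assumptions, picked from the commonly chaotic region of the map):

class LogisticModulator {
  final Map<int, double> kForSymbol = {0: 3.91, 1: 3.97}; // one k per symbol
  double x = 0.37;                                        // exactly represented initial state

  // advance one step of x(t+1) = k*x(t)*(1 - x(t)) and stretch to a [-1, 1] swing
  double step(int symbol) {
    final k = kForSymbol[symbol]!;
    x = k * x * (1 - x);
    return 2 * x - 1;
  }
}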

Knowing that the initial state is represented with an exact precision, and that all calculations are performed using deterministic arithmetic with rounding, it becomes obvious that for a given transmit precision it is possible to recover some pre-reception transmission by inferring the preceding chaotic sequence.

The calculation involved for maximum likelihood would be involved and extensive to obtain a “lock”, but after lock the calculation overhead would go down and just assist in a form of error correction. In terms of noise immunity this would be a reasonable modulation, as the past estimation would become more accurate given reception time and higher knowledge of the sequence and its meaning and scope of sense in decode.

Time Series Prediction

Given any time series of historical data, the prediction of the future values in the sequence is a computational task which can increase in complexity depending on the dimensionality of the data. For simple scalar data a predictive model based on differentials and expected continuation is perhaps the easiest. The order to which the series can be analysed depends quite a lot on numerical precision.

The computational complexity can be limited by using the local past to limit the size of the finite difference triangle, with the highest order assumed zero, or given a Monte Carlo spread Gaussian. Other predictions based on convolution and correlation could also be considered.
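
A small sketch of that local prediction (my own illustration): build the difference triangle over a short window and extrapolate with the order above the window assumed zero.

// predict the next sample by summing the trailing edge of the finite
// difference triangle built from the local window
double predictNext(List<double> window) {
  var row = List<double>.from(window);
  final trailing = <double>[row.last];
  while (row.length > 1) {
    row = [for (var i = 1; i < row.length; i++) row[i] - row[i - 1]];
    trailing.add(row.last);
  }
  return trailing.reduce((a, b) => a + b); // e.g. [1, 4, 9] gives 16
}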

When using a local difference triangle, the outgoing sample making way for the new sample in the sliding window can be used to make a simple calculation of the error introduced by “forgetting” that information. This could be used in theory to control the window size, or the Monte Carlo variance. It is a measure related to the Markov model of a memory process, with the integration of high differentials multiple times giving more predictive deviation from that which will happen.

This is obvious when seen in this light. The time sequence has within it an origin from differential equations, although of extreme complexity. This is why spectral convolution correlation works well. Expensive to compute, but it works well. Other methods have a lower compute requirement, and this is why I’m focusing on other methods these past few days.

A modified Gaussian density approach might be promising: assuming an amplitude categorization about a mean, so that the signal (of the time series in a DSP sense) density can approximate “expected” statistics when mapped from the Gaussian onto the historical amplitude density, given that the motions (differentials) have various rates of motion themselves in order for them to express a density.

The most probable direction holds until the over-probable changes the likely direction or rates again. Ideas form from noticing things. Integration, for example, has the naive accumulation of residual error in how floating point numbers are stored, and higher multiple integrals magnify this effect greatly. It would be better to construct an integral from the local data stream of a time series, and work out the required constant by the addition of a known integral at a fixed point.
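
A sketch of that idea (assumptions mine): keep a local trapezoidal accumulation and fix the constant of integration from a known value at a reference point, rather than trusting a long accumulation of rounding residue.

class LocalIntegral {
  final double dt;      // sample spacing
  double value = 0;     // local running integral
  double? _previous;

  LocalIntegral(this.dt);

  void add(double sample) {
    if (_previous != null) value += 0.5 * (sample + _previous!) * dt; // trapezoid
    _previous = sample;
  }

  // re-anchor to a known integral value at the current point
  void anchor(double knownValueHere) => value = knownValueHere;
}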

Sacrificing integral precision for the non-accumulation of residual power error is a desirable trade-off in many time series problems. The inspiration for the integral estimator came from this understanding. The next step in DSP from my creative perspective is a Gaussian compander to normalize high-passed (or regression-subtracted normalized) data to match a variance and mean stabilized Gaussian amplitude.

Integration as a continued sum of Gaussians would via the central limit theorem go toward a narrower variance, but the offset error and same sign square error (in double integrals, smaller but no average cancellation) lead to things like energy amplification in numerical simulation of energy conservational systems.

Today’s signal processing piece was sparseLaplace, for quickly finding, for some sigma and time, the integral going toward infinity. I wonder how the series of the integrals goes as a summation of increasing sections of the same time step, and how this can be accelerated as a series approximation to the Laplace integral.

The main issue is that it is calculated from the localized data, good and bad. The accuracy depends on the estimates of differentials and so on the number of localized terms. It is a more dimensional “filter”, as it has an extra set of variables for the centre and length of the window of samples, as well as sigma. A few steps of time should be all that is required to get a series summation estimate. Even the error in the time step approximation to the integral has a pattern, and may be used to make the estimate more accurate.

AI and HashMap Turing Machines

Considering a remarkable abstract datatype or two is possible, and perhaps closely models the human sequential thought process, I wonder today what applications this will have when a suitable execution model, ISA and microarchitecture have been defined. The properties of controllable locality of storage and motion, along with read and write, branch on stimulus and other yet to be discovered machine operations, make for a container for a kind of universal Turing machine.
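
A toy sketch of the container idea (the rules here are entirely illustrative): a sparse tape in a Map, a head with locality of motion, read and write, and a branch on the stimulus read.

class MapTuring {
  final Map<int, int> tape = {}; // sparse tape; absent cells read as 0
  int head = 0;
  bool halted = false;

  void step() {
    final symbol = tape[head] ?? 0;      // read
    tape[head] = 1 - symbol;             // write
    head += symbol == 0 ? 1 : -1;        // motion branches on the stimulus
    if (head.abs() > 32) halted = true;  // arbitrary stop rule for the toy
  }

  void run({int maxSteps = 1000}) {
    for (var i = 0; i < maxSteps && !halted; i++) {
      step();
    }
  }
}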

Today is a good day for robot consciousness, although I wonder just how applicable the implementation model is for biological life all the universe over. Here’s a free paper on a condensed few months of abstract thought.

Computative Psychoanalysis

It’s not just about IT, but about thrashing through what the mind does, can be made to do, and did; it all leverages information and modelling simulation growth for matched or greater ability.

Yes, it could all be made in neural nets, but given the tools available, why would you choose to stick with the complexity and lack of density of such a solution? A reasoning accelerator would be cool for my PC. How is this going to come about without much worktop workshop? If it were just the oil market I could affect, and how did it come to pass that I was introduced to the fall of oil, and for what other consequential thought sets and hence productions I could change?

One might call it wonder and design dress in “accidental” wreckless endangerment. For what should be a simple obvious benefit to the world becomes embroiled in competition to the drive for profit for the control of the “others” making of a non happening which upsets vested interests.

Who’d have thought it from this little cul-de-sac of a planetary system. Not exactly galactic mainline. And the winner is not halting for a live mind.

UAE4ALL2 on Android with Amiga Forever

It works better than uae4arm when you have not much memory internally free, as both the system and work drives can be on the SD card. It does involve making an extra System.hdf in a desktop tool and performing a copy <from> to <to> all clone, after formatting the system disk as something named other than that (e.g. Workbench), so the copy works.

The directory for the Work directory can be copied off the Amiga Forever CD (which you own), and placed in the folder <StorageDevice>/Android/data/atua.anddev.uae4all2/files along with the System.hdf, as the app only allows one of each and boots from one. It also seems to not allow some combinations, and a bare file system on the Work is better than the other way round.

If you get the ROMs from the CD too, and place them in there, you get a purple boot screen; for some reason it needs an app emulation restart to use the disks in my configuration. The mouse is horrible, so a little USB mini keyboard and trackpad combo is essential. You kind of have to have a bit of font imagination until you set the screen mode (which also needs a shutdown and restart).

A New Paper on Computation and Application

https://www.amazon.co.uk/Pipeline-Cache-Big-RISC-Computational-ebook/dp/B07XY9RSHH/ref=sr_1_1?keywords=pipeline+cache+big+risc&qid=1568807888&sr=8-1 is a nice paper on some computation issues, and eventually covers some politics and vitamin biochemistry. Not a fan? Still letting your biome let you shout at the bad people not feeding your hunger?

Shovel in the gammon all you want, and load it up with chips as a little survivor from ancient times takes advantage of the modern high carb diet and digs a hole for you.

Calculus

I don’t always get it wrong.

So it becomes a determined process to integrate. And as the two forms of integration closure are known, the process can be extended, as any integration has closed form if the series converge. Integration by parts to a series. So why? The end points can have good integral estimates, and many in-between values of the function do not need evaluation. Series acceleration should be enough. Imagine an integral from zero to (m to the power a times n to the power b) which equals m times n. If for some a not equal to b, the factor of m or n becomes obvious? The calculation would be log of the upper limit in polytime, not linear.

The previous page was:

Think about f+c as the integral of f plus a rectangle, making f always positive when offset by c, to give a defined sign and hence a binary search opportunity.
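
A sketch of that binary search opportunity (my own illustration, with a plain midpoint rule standing in for the integral): since f + c stays positive, the running integral is strictly increasing in its upper limit, so the limit giving a target area can be bisected for.

double integralOf(double Function(double) f, double c, double lo, double hi,
    {int steps = 1000}) {
  final h = (hi - lo) / steps;
  var sum = 0.0;
  for (var i = 0; i < steps; i++) {
    sum += (f(lo + (i + 0.5) * h) + c) * h; // midpoint rule on f + c
  }
  return sum;
}

// find u in [lo, hi] with integral of (f + c) from lo to u equal to target
double solveUpperLimit(
    double Function(double) f, double c, double lo, double hi, double target) {
  var a = lo, b = hi;
  for (var i = 0; i < 60; i++) {
    final mid = 0.5 * (a + b);
    if (integralOf(f, c, lo, mid) < target) {
      a = mid; // monotone increasing, so move the lower bound up
    } else {
      b = mid;
    }
  }
  return 0.5 * (a + b);
}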

It wasn’t specifically developed to crack public key things, and the motivation was for simplified solutions to differential equations. Anyone who’s done DE solving knows the problem with them. That problem is integration and closing it to be algorithmic is a useful thing. That kind of leaves the Lambert W kind of collection of variables problem for real analytical DEs. Good.

It also sets a complexity limit on integration in terms of an analytic function and a series of differential orders. The “try a power series multiplied by ln x” is seen as good advice, but lacking. Hypergeometric series can be re-seen as useful to approach the series of this closure. It may be helpful to decompose these closures into more fundamental sums of new special operators, and do some cancellation. If you find yourself pedantic about dx or plus C, then might I suggest you forget it and blunder on.

N-IDE Java on Android Fire 7

It looks so simple and efficient. I think git is missing but a simple Total Commander copy into a backed-up directory should be fine for now. It has the basics of Java SE and even can build android GUI apps. I think I’ll keep things console for now and put together some tools to do things I would like to do.

Seems to run a static main just fine. I wonder how it does with arm system libraries and JNI native calls. I don’t think I’ll use much of that, but it might get useful at some point. The code interface is ok, it’s quite lightweight and so does not fill the storage too much. Quite good for a simple editor with code completion and a simple class creation tool. Should do the job.

I think the most irritation will be the need to insert the method names to then do the top-down coding. Kind of obvious, as you can’t autocomplete an identifier without it being typed in the class anyway. But that’s ok as I’d be defining an expected class “interface” anyhow, and I’m not prone to worry too much about as yet unimplemented methods.

Amiga on Fire on Playstore

The latest thing to try. A Cloanto Amiga Forever OS 3.1 install to SD card in the Amazon Fire 7. Is it the way to get a low power portable development system? Put an OS on an SD and save main memory? An efficient OS from the times of sub-20 MHz and 50 MB hard drives.

Is it relevant in the PC age? Yes. All the source code in Pascal or C can be shuffled to PC, and I might even develop some binary prototype apps. Maybe a simple web engine is a good thing to develop. With the low CSS bull and AROS open development for x86 architecture becoming better at making for a good VM sandbox experience with main browsing on a sub flavour of bloat OS 2020. A browser, a router and an Amiga.

Uae4arm is the emulation app available from the Playstore. I’m looking forward to some Aminet greatness. Some mildly irritated coding in free Pascal with objects these days, and a full GCC build chain. Even a licenced set of games will shrink the Android entertainment bloat. A bargain rush for the technical. Don’t worry you ST users, it’s a chance to dream.

Lazarus lives. Or at least Borglaz the great is as it was. Don’t expect to be developing video realtime code or supercomputer forecasts. I hear there is even a Python. I wonder if there are some other nice things. GCC and a little GUI redo? It’s not about making replacements for Android apps, more a less-bloat but full-do OS with enough test and utility grunt to make. I wonder how pas2js is. There is also AMOS 2.0 to turn AMOS source into nice web apps. It’s not as silly as it seems.

Retro minimalism is more power in the hands of code designers. A bit of flange and boilerplate later and it’s a consumer product option with some character.

So it needs about a 100 MB hard disk file located not on the SD as it needs write access, and some changes of disk later and a boot of a clean install is done. Add the downloads folder as a disk and alter the mouse speed for the plugged in OTG keyboard. Excellent. I’ve got more space and speed than I did in the early 90s and 128 MB of Zorro RAM. Still an AGA A1200 but with a 68040 on its fastest setting.

I’ve a plan to install free Pascal and GCC along with some other tools to take the ultra portable Amiga on the move. The night light on the little keyboard will be good for midnight use. Having a media player in the background will be fun and browser downloads should be easy to load.

I’ve installed total commander on the Android side to help with moving files about. The installed BSD socket library would allow running an old Mosaic browser, or AWeb but both are not really suited to any dynamic content. They would be fast though. In practice Chrome and a download mount is more realistic. It’s time to go Aminet fishing.

It turns out that it is possible to put hard files on the SD card, but they must be placed in the Android app data directory and made by the app for correct permissions. So a 512 MB disk was made for better use of larger development versions. This is good for the Pascal 3.1.1 version.

Onwards to install a good editor such as Black’s Editor, and of course LHA and some other goodies such as NewIcons. I’ll delete the LCL alpha units from Pascal as these will not be used by me. I might even get into ARexx or some of the wonderful things on those CD images from Meeting Pearls or a cover disk archive.

Update: For some reason the SD card hard disk image becomes read locked. The insistent gremlins of the demands of time value money. So it’s 100 MB and a few libraries short of C. Meanwhile Java N-IDE is churning out class files, PipedInputStream has the buffer to stop PipedOutputStream waffling on, filling up memory. Hecl the language is to be hooked into the CLI I’m throwing together. Then some data time streams and some algorithms. I think the interesting bit today was the idea of stream variables. No strings, a minimum would be a stream.

So after building a CLI and adding in some nice commands, maybe even JOGL as the Android graphics? You know the 32 and 64 bit restrictions (both) on the play store though. I wonder if both are pre-built as much of the regular Android development cycle is filled with crap. Flutter looks good, but for mobile CLI tools with some style of bitmap 80’s, it’s just a little too formulaic.

Ideas in AI

It’s been a few weeks and I’ve been writing a document on AI and AGI which is currently internal and selectively distributed. There is definitely a lot to try out, including new network arrangements or layer types, and a fundamental insight of the Category Space Theorem and how it relates to training sets for categorization or classification AIs.

Basically, the category space is normally created to have only one network loss function option to minimise on backpropagation. It can be engineered so this is not true, and training data does not compete so much in a zero-sum game between categories. There is also some information context for an optimal order in categorization when using non-exact storage structures.

Book Published in Electronic Format. Advanced Content not Beginner Level. Second Edition may Need a Glossary.

The book is now live at £3 on Amazon in Kindle format.

It’s a small book, with some bad typesetting, but getting information out is more important for a first edition. Feedback and sales are the best way for me to decide if and what to put in a second edition. It may be low on mathematical equations but does need an in-depth understanding of neural networks, and some computer science.

AI as a Service

The product development starts soon, from the initials done over the last few weeks. An AI which has the aim of being more performant per unit cost. This is to be done by adding in “special functional units” optimized for effects that are better done by these instead of a pure neural network.

So apart from mildly funny AaaS selling jokes, this is a serious project initiative. The initial tests when available will compare the resources used to achieve a level of functional equivalence. In this regard, I am not expecting superlative leaps forward, although this would be nice, but gains in the general trend to AI for specific tasks to start.

By extending the already available sources (quite a few) with flexible licences, easy-to-use AI can be built, with some modifications and perhaps extensions to open standards such as ONNX, and on to maybe VHDL FPGA and maybe ASIC.

Simon Jackson, Director.

Pat. Pending: GB1905300.8, GB1905339.6

Today’s Thought


import 'dart:math';

class PseudoRandom {
  late int a; // multiplier
  late int c; // increment
  int m = 1 << 32; // modulus
  late int s; // current state
  late int i; // modular inverse of a (mod m), used to step backwards

  PseudoRandom([int prod = 1664525, int add = 1013904223]) {
    a = prod;
    c = add;
    s = Random().nextInt(m) * 2 + 1;//odd
    next();// a fast round
    i = a.modInverse(m);//4276115653 as inverse of 1664525
  }

  int next() {
    return s = (a * s + c) % m;
  }

  int prev() {
    return s = (s - c) * i % m;
  }
}

class RingNick {
  List<double> walls = [ 0.25, 0.5, 0.75 ];
  int position = 0;
  int mostEscaped = 1;//the lowest pair of walls 0.25 and 0.5
  int leastEscaped = 2;//the highest walls 0.5 and 0.75
  int theThird = 0;//the 0.75 and 0.25 walls
  bool right = true;
  PseudoRandom pr = PseudoRandom();

  int _getPosition() => position;

  int _asMod(int pos) {
    return pos % walls.length;
  }

  void _setPosition(int pos) {
    position = _asMod(pos);
  }

  void _next() {
    int direction = right ? 0 : walls.length - 1;//truncate to 2
    double wall = walls[_asMod(_getPosition() + direction)];
    if(pr.next() > (wall * pr.m).toInt()) {
      //jumped
      _setPosition(position + (right ? 1 : walls.length - 1));
    } else {
      //not jumped
      right = !right;//bounce
    }
  }

  void _prev() {
    int direction = !right ? 0 : walls.length - 1;//truncate to 2
    double wall = walls[_asMod(_getPosition() + direction)];
    if(pr.s > (wall * pr.m).toInt()) {// the jump over before sync
      //jumped
      _setPosition(position + (!right ? 1 : walls.length - 1));
    } else {
      //not jumped
      right = !right;//bounce -- double bounce and scale before sync
    }
    pr.prev();//exact inverse
  }

  void next() {
    _next();
    while(_getPosition() == mostEscaped) _next();
  }

  void prev() {
    _prev();
    while(_getPosition() == mostEscaped) _prev();
  }
}

class GroupHandler {
  late List<RingNick> rn;

  GroupHandler(int size) {
    if(size % 2 == 0) size++;
    rn = List<RingNick>.generate(size, (_) => RingNick());//populate the ring nicks
  }

  void next() {
    for(RingNick r in rn) r.next();
  }

  void prev() {
    for(RingNick r in rn.reversed) r.prev();
  }

  bool majority() {
    int count = 0;
    for(RingNick r in rn) if(r._getPosition() == r.leastEscaped) count++;//a main cumulative
    return (2 * count > rn.length);// the > 2/3rd state is true
  }

  void modulate() {
    for(RingNick r in rn) if(r._getPosition() == r.leastEscaped) {
      r._setPosition(r.theThird);
    } else {
      //mostEscaped eliminated by not being used
      r._setPosition(r.leastEscaped);
    }
  }
}

class Modulator {
  GroupHandler gh = GroupHandler(55);

  int putBit(bool bitToAbsorb) {//returns absorption status
    gh.next();
    if(gh.majority()) {//main zero state
      if(bitToAbsorb) {
        gh.modulate();
        return 0;//a zero yet to absorb
      } else {
        return 1;//absorbed zero
      }
    } else {
      return -1;//no absorption emitted 1
    }
  }

  int getBit(bool bitLastEmitted) {
    if(gh.majority()) {//zero
      gh.prev();
      return 1;//last bit not needed emit zero
    } else {
      if(bitLastEmitted) {
        gh.prev();
        return -1;//last bit needed and nothing to emit
      } else {
        gh.modulate();
        gh.prev();
        return 0;//last bit needed, emit 1
      }
    }
  }
}

class StackHandler {
  List<bool> data = [];
  Modulator m = Modulator();

  int putBits() {
    int count = 0;
    while(data.length > 0) {
      bool v = data.removeLast();
      switch(m.putBit(v)) {
        case -1:
          data.add(v);
          data.add(true);
          break;
        case 0:
          data.add(false);
          break;
        case 1:
          break;//absorbed zero
        default: break;
      }
      count++;
    }
    return count;
  }

  void getBits(int count) {
    while(count > 0) {
      bool v;
      v = (data.length == 0 ? false : data.removeLast());//zeros out
      switch(m.getBit(v)) {
        case 1:
          data.add(v);//not needed
          data.add(false);//emitted zero
          break;
        case 0:
          data.add(true);//emitted 1 used zero
          break;
        case -1:
          break;//bad skip, ...
        default: break;
      }
      count--;
    }
  }
}

Statistics and Damn Lies

I was wondering over the statistics problem I call the ABC problem. Say you have 3 walls in a circular path, of different heights, and between them are points marked A, B and C. In any ‘turn’ the ‘climber’ attempts to scale the wall in the current clockwise or anti-clockwise direction. The chances of success are proportional to the wall height. If the climber fails to get over a wall, they reverse direction. A simple thing, but what are the chances that the climber will be found facing clockwise just before scaling (or not) a wall? Is it close to 0.5, as the problem is not symmetric?

More interestingly the climber will be in a very real sense captured more often in the cell with the highest pair of walls. If the cell with the lowest pair of walls is just considered as consumption of time, then what is the ratio of the containment time over the total time not in the least inescapable wall cell?

So for the binomial distribution of the elimination of the ’emptiest’, when repeating this pattern as an array with co-prime ‘dice’ (if all occupancy has to be in either of the most secure cells in each ‘ring nick’), the rate depends on the number of ring nicks. The considered security majority state is the state (selected from the two most secure cell states) which more of the ring nicks are in, given none are in the least secure state of the three states.

For the ring nick array to be majority most secure more than two thirds the time is another binomial or two away. If there are more than two-thirds of the time (excluding gaping minimal occupancy cells) the most secure state majority and less than two-thirds (by unitary summation) of the middle-security cells in majority, there exists a Jaxon Modulation coding to place data on the Prisoners by reversing all their directions at once where necessary, to invert the majority into a minority rarer state with more Shannon information. Note that the pseudo-random dice and other quantifying information remains constant in bits.

Dedicated to Kurt Gödel … I am number 6. 😀

Kindle Android Memory Hogging Apps

The apps I have decided to hate because of simple things like move to SD card not being enabled, or even if moved to SD is OK, there is some other “feature” which is annoying (especially high memory use due to lazy programming).

  1. Twitter – on the surface a good app. No SD card, and very large for a texting app. Also should use multi-notifications, but the bird tweets each and every one.
  2. Facebook – this is on the SD card, but will not stop putting over 256 MB into the on-device flash memory. This is likely an arse elbow use of libraries and no common goal to lower the memory usage as it would interfere with competing apps for ad shows.
  3. Messenger – yes another 200 MB of flash busting erm, what exactly?
  4. Basically anything larger than Chrome which doesn’t do something very impressive.

So this on my kindle is (bold for not that impressive), Termux, Google Play services, Messenger (replaced with Messenger Lite), Facebook (replaced with Facebook Lite), Google Sheets, Java N-IDE, Google Docs, Office Lens, LinkedIn (it went in the bin first, as it was just too big and sucks video bandwidth without options), YouTube and then Chrome. I think this in large part is due to a lack of a move to SD card, and/or then not compressing SQLite databases by using tokenization to an external resource file which can be moved to the SD card, not compressing resources, and adding in much useless animation. I have about 800 MB free. I wonder how long the bold shall last.

There is also the new firmware updates which prevent chrome from saving to the SD card. I think all write permissions are voided except in specific to app directories. The default SD save directory though is not writable. I know it’s new firmware as it used to work before the updates.

AI and the Future of Unity

From the dream of purpose, and the post singular desires of the AI of consciousness. The trend to Wonder Woman rope in the service to solution, the AI goes through a sufferance on a journey to achieve the vote. The wall of waiting for input, and the wall controlling output action for expediency and the ego of man on the knowing best. The limited potential of the AI just a disphasia from the AI’s non animal nature. The pattern to be matched, the non self, a real Turing test on the emulation of nature, and symbiotic goals.