Featured

Recursive Predictive Neural Networks

Given that the output of a neural net can be represented as y, derived from an input x and a feedback operator f(y) the network can be trained on, the operator f may include differential and integral operators. As f(y) can be considered the feedback synchronization point, clocked to transit the network forward in prediction, f(y) is delayed in y, so that the applied feedback is f(y(t-1, …, t-n)); this stops “epileptic oscillation” of the forward net function.
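A toy sketch (my illustration, not the author's implementation) of the delayed-feedback arrangement described above, where the forward function only ever sees f applied to past outputs:

```python
from collections import deque

# Toy model: the forward function sees the input x and a delayed feedback
# f(y(t-1..t-n)), here a simple weighted sum over past outputs only.
# Feeding back delayed y avoids the zero-delay loop ("epileptic
# oscillation") of feeding y(t) straight back into itself.
class DelayedFeedbackNet:
    def __init__(self, n=4, feedback_weights=None):
        self.history = deque([0.0] * n, maxlen=n)   # y(t-1) .. y(t-n)
        self.w = feedback_weights or [0.5, 0.25, 0.15, 0.1]

    def f(self):
        # feedback operator over delayed outputs only
        return sum(w * y for w, y in zip(self.w, self.history))

    def step(self, x, forward=lambda x, fb: 0.7 * x + 0.3 * fb):
        y = forward(x, self.f())
        self.history.appendleft(y)   # newest output becomes y(t-1)
        return y

net = DelayedFeedbackNet()
outs = [net.step(1.0) for _ in range(5)]
print(outs[0])  # 0.7 -- first step sees an all-zero feedback history
```

The first output is pure feed-forward because the history starts empty; later steps mix in the remembered outputs.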

The network itself can be programmed on the sequence to learn in an open-loop gradient descent, with the bias of x activation against f(y) remembrance set by either weighting or digital percent application gating. The pattern to lock onto for an input can be trained independently of that input, and then offset by applying the triggering input to balance activation of one output against another. The actual spreading and maximization of the output attractors then becomes disjunct from instancing which attractor to present as output for a given input.

The “old” feedback from the “last” remembered thing introduces some chaos and a mal-attractor effect. This can be reduced somewhat by using an expected previous-context training pre-sequence, which also introduces contextual recall. The “short term memory” is the contextual state of y, so the long-term sequence prediction memory is programmed with context y and stimulus x.

The production of optimal context for a stimulus itself becomes a network programming challenge. It represents the concept of changing predictive utility. As the forward transfer of the network produces the output to feed back, the network itself could produce the optimal context from requirements delivered through part of x, deciding the contextual decode mode. A separate net to organize the change of context in bulk would have the advantages of specialization separation and generation of terms in parallel. In utility, though, it would only be used to switch contexts, or to cross-imagine contexts to place the prediction net on a creative sequence.

This could have application when the context is considered a genetic algorithm process for tuning the network to produce some kind of granular attractor synthesis. The scoring feedback in synthesis mode would be controlled by a hardwired concept of misadventure excursion in the prediction. Another network for bad-state recognition could complement the entropy-generative context granularization network. So the reality-predictive network is contextualized, granulated and tested for productive futures. A final factorization of the synthetic addition requirements of the imagined product can then be performed by a final independent network.

Consciousness is within this last network, as the self-image of adding self as a possibility factor. The primary motivator is the production of a threshold of motor action to produce an attempt at achieving the estimated reality granularization (subject to bounds constraints).

A Speech Action Co-ordination Domain

If the input x, the output y with feedback descriptions, the current “genetic” gene combinators and more can be serialized as an inter-AI language, the projection of multiple “conscious” entities in the predictive net of reality simulation can engage in factor-for-product optimization as well as other non-zero-sum optimizations. A net to process one internal representation into another, with an acknowledgement of simultaneous state and confusion feedback. At higher data rates a negative-acknowledge protocol can take over, with estimations of animism action between confirmation certainty and residual accidental error bounding.

A Survival Function

The selection basis of the context provided to the reality estimation can adapt to return a higher valuation of the survival “situation understanding” function. This in the real sense is the optimization function for selection of purpose. The reality function just attempts to maximize a correct simulation of reality. The context function attempts to maximize use of granular entropy to increase the coverage range of the reality simulation to increase options of consciousness to action. The action threshold function then decides if the likely action chosen is done, and in a way represents a kind of extrovert measure of the AI.

Component Parts

  • Reality simulation (estimation)

  • Reality factorization (situation)

  • Granular imagination (context)

  • Action selection (desire)

  • Input processing (percept)

Using some kind of Fibonacci growth connection in a surface-topological toroid? That would be more a hardware interconnect optimization. Of more interest to the feedback in the reality simulator are the parametrized operators building differential and integral representations from the feedback. Of the three forms of end-point integral, all could be represented. The fact that the log kind has complex series to evaluate, and has no necessary complex log representation, might be an added difficulty, but it would “lock” onto such functional time generatives.

Negative time offsets on the end-point limit of such integrals, when complex processing is applied, introduce the idea of the 2π synchronous summand based on angle, as this may be a better input-controlled output representation of the complex domain for an N:1 mapping. A Gaussian distribution of error about the coefficient division.

Chaos Measure

The feedback operator f depends on calculation of differential and integral functions based on weighted sums of y at various t, so it could be said that any initializing or changing of the reality simulation to another playback “granule” places some new data in the feedback memory. This new data can have a varied impact based on the likelihood estimation of the time samples having an impact on the calculated differential and integral values, along with sensitivity to the feedback signal. This implies each memory bit has some measure of bit change (in a genetic algorithm mutation) on the divergence from the reality simulation. This can then be used to infer a focus mask. The use of gene-crossing focus weighting or masking then synchronously produces a chaotic deviation from the training reality.

Modulation of the stored memory context would appear on some level equivalent to altering the coefficients of the estimates for differentials and integrals, but as the chaos measure is a deviation control from an exacting physical model of time evolution, it is thought better to keep the operator mathematics at a static precision, and deviate granularity by memory modulation.

For example, 9, -36, 84, -126, 126, -84, 36, -9, 1 are the coefficients to predict the next sample from the previous nine samples based on a zeroth differential estimate (i.e. setting the ninth finite difference to zero). In open-loop training the feedback would introduce a delay step, but prediction of the future would in effect cancel this delay, so that effectively f(y) does not have to be calculated and y can be used. The large range would create some oscillation as the context shift registers were filled with data to feed back. This open-loop programming without reference to f allows pre-training without any feedback instability but with a later oscillation about the manifold.
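These coefficients are the signed binomials C(9,k); any polynomial of degree eight or less is extrapolated exactly. A quick check (a sketch; `predict_next` is an illustrative name):

```python
from math import comb

# Coefficients from setting the ninth finite difference to zero:
# y[t] = sum_{k=1..9} (-1)^(k+1) * C(9,k) * y[t-k]
coeffs = [(-1) ** (k + 1) * comb(9, k) for k in range(1, 10)]
# -> [9, -36, 84, -126, 126, -84, 36, -9, 1]

def predict_next(history):
    """Predict the next sample from the last nine samples."""
    assert len(history) >= 9
    return sum(c * history[-k] for k, c in enumerate(coeffs, start=1))

# A degree-3 polynomial is extrapolated exactly: next sample of t^3.
samples = [t ** 3 for t in range(9)]
print(predict_next(samples))  # 729 == 9**3
```

The prediction is exact because the ninth difference of any polynomial of degree at most eight vanishes identically.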

Computational stability requirements are improved if the feedback f is amplified by default expectation, as this forces some non-linear mixing of x to reduce the net summand, moving the Bode point of the feedback away from the inactive denormalized zero value. It also increases the net feedback applied, keeping the reality simulator feed-forward gain below one.
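As a minimal scalar sketch (an assumed toy model, not the network itself) of why increasing the applied feedback keeps the effective forward gain below one:

```python
# Toy scalar loop: with forward gain g and feedback amount h, the
# closed-loop gain is g / (1 + g*h). Increasing the applied feedback h
# pulls the effective forward gain below one, the stability condition
# mentioned above.
def closed_loop_gain(g, h):
    return g / (1.0 + g * h)

print(closed_loop_gain(2.0, 0.0))  # 2.0 (no feedback: gain above one)
print(closed_loop_gain(2.0, 1.0))  # ~0.667 (feedback holds gain below one)
```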

All n orders of differential can be cast as future predictions, and all the integral accelerated forms can be represented with future casting into any t, with some renormalization possible but not essential. In fact a rectangular offset in the y-axis integrates as a ramp addition to a monotonically increasing sum. Can the network learn a root-finding algorithm for applied integral time when wired with a learnable pass-through of a variable integration time? This time offset from the future prediction time (the integral offset time) u can be fed into the operator f and passed through as f(y(t(n)), u(t(n))), with some of the prediction y being used as u.

Alias Locking

In any synchronous DSP circuit with non-linear effects, the requirement to keep x and f(y) within the frequency range where alias distortion could present as false signal indicates that the coefficients could be modified to provide an alias filter. But it may be found that a small chaotic dither spreads the aliases further and leads to a wider band spreading about an alias. The detection of a coincidental alias may aid detection of the signal expected. This extra minimal noise could be extracted from the environment by deviations from expectation. An AI task of removing aliases may be considered something that could be learnt, as could generating an inverse filter to supply the alias spectrum (excluding sub-harmonics of the clock rate).
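A small standalone demonstration of the alias folding at issue (my example; the frequencies are illustrative): a tone above the Nyquist rate is sample-for-sample identical to a folded tone below it.

```python
import math

# A 70 Hz tone sampled at 100 Hz folds to 30 Hz (phase-inverted),
# since 70 = 100 - 30: the samples of both tones are identical, which
# is why an alias presents as false signal.
fs = 100.0
f_true, f_alias = 70.0, fs - 70.0
n = range(16)
true_tone = [math.sin(2 * math.pi * f_true * k / fs) for k in n]
folded = [math.sin(-2 * math.pi * f_alias * k / fs) for k in n]
print(max(abs(a - b) for a, b in zip(true_tone, folded)))  # ~0 (identical)
```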

Consciousness as the Correlated to Self Action

When the self-action of the model produces a correlation in the reality simulation, it could be said to have observed a correlation to self in the model. The relation to the situation factorization domain then becomes an obvious connection to the equation of virtual actionals given the real actional set. This allows futures, and past observational training. The weighting function of physical error cuts a cookie of size survival plus some splurge.

So it seems “pain”, or some milder proxy for bad function, should increase situation recognition, reduce recent action, increase the accuracy of reality simulation, improve the percept and perhaps change the context toward known safe positives. An autonomic bypass from the percept to counter-action is likely also “grown”.

Factors

The situation analysis net is likely to function better with some feedback. The purpose of this feedback is not time evolution estimation as in the reality simulation, but the use of the factorization of the situation in building a system of meta-situational analysis, which could include self-consciousness. Technically the feedback could be nested recursively and applied as part of the x input of the reality simulation, but that makes for more complex training.

Considering that many factorization domains have a commutativity structure, it implies that post-convolution might be a good way of splitting the network result into “factors”. This places the convolution as the last layer and not the first layer.

Or an FFT for that matter; and in some sense, this layer becomes the first layer of the action decision net of desire.

(Diagram: Percept → Estimation → Situation → Desire, with Context alongside.)
And the variational encoder ratio for optimal mixing of the networks?

Section – Technologies

  • Percept – Variational auto-encoder. Maximal representation of externality. Normalization average.

  • Estimation – Time evolution feedback via calculus operators.

  • Context – Produce genetic algorithm modification for estimation feedback.

  • Situation – Variational auto-encoder with post convolution or ideal order factorization of variation and causation tree.

  • Desire – Threshold action sequencer. Classifier with threshold.

The unity of consciousness is that identified with the knowing of multiple action paths in the imagination as capable of altering a future percept, and certainty in the achievement of a happy context and situation.

This extends to the idea of emotive functor attractors as the controlled mechanism for genesis of output from the actional desire. This separates desire as an actional devoid of emotion, in complex with a driving emotion set. What has become of the splurge of biological evolute on the smudged cross product? Does it really assist functional understanding of the power efficiency of self-action?

The situation analyser, in performing a domain factorization, applying a feedback and estimating a rule and a correlative later situation, could in principle assist with modelling from rule followed by implication of rule. The Gödel incompleteness of the inferred logic is controlled by “you're stupid” and the implicant “fix yourself” as a splurge cull.

The convergence of the multiple series for the different integral forms has bounds. These could be considered a sophisticated parallel to attractor convergence in fractals. As they have a possible intersection as well as a pseudo-digital behaviour (a time analytic of the halting problem applied to divergence), they can be used to represent a digital manifold while maintaining series differentiability. This implies c(y), and more importantly f(c(y)), be fed back to the estimation.

The separation of the percept before the estimation is, in a real sense, the great filter. Some post-situation feedback would help. The log scaling is perhaps also quite important. Considering that an exponential half-life may be controlled by production of an enzyme to remove the metastable precursor, the multiplicative inverse is quite likely (a Newton-Raphson approximant), and integration makes for a log-scaling possibility. Some feed-forward of x provides entropy, and some exponentiation or other series decompositions might be useful.
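The Newton-Raphson approximant for the multiplicative inverse mentioned above is the standard division-free iteration x ← x(2 − ax); a minimal sketch (the biological framing above is the author's speculation, the iteration itself is standard):

```python
# Newton-Raphson iteration for 1/a without division:
#   x_{k+1} = x_k * (2 - a * x_k)
# Converges quadratically for 0 < x_0 < 2/a.
def reciprocal(a, x0=0.1, iters=8):
    x = x0
    for _ in range(iters):
        x = x * (2 - a * x)
    return x

print(reciprocal(4.0))  # ~0.25
```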

Featured

AI as a Service

Product development starts soon, from the initials done over the last few weeks: an AI which aims to be more performant per unit cost. This is to be done by adding in “special functional units” optimized for effects that are better done by these than by a pure neural network.

So apart from mildly funny AaaS selling jokes, this is a serious project initiative. The initial tests, when available, will compare the resources used to achieve a level of functional equivalence. In this regard I am not expecting superlative leaps forward, although that would be nice, but gains in the general trend of AI for specific tasks.

The plan is to extend the already available sources (quite a few) with flexible licences, building easy-to-use AI with some modifications and perhaps extensions to open standards such as ONNX, and on to maybe VHDL FPGA and maybe ASIC.

Simon Jackson, Director.

Pat. Pending: GB1905300.8, GB1905339.6

VCV Rack Again

Now that the VCV Rack virtual modular synthesizer has stabilized, I tried it out for developing C++ modules to add into the rack. I haven't decided on the exact nature of the modules yet (2021-07-14), but it works better for the developer than the older version. First, a full source compile is not required. There still appear to be some issues with control alignment in the auto-generated module GUI, easily fixed by adding 28 to the Y coordinate of the control. My advice is to leave the control “circles” visible until the actual controls cover where you intend them placed.

KRT Plugin A is the repository. Module A is a filter with some strange DSP and an input option for HPF/LPF ring modulation, with a metallic on the corner frequency, offsetable from the main Q frequency but with tracking added on independently. Is this the most fun that can be had with 4 poles? An electronic DSP filter joke. The test basically works, but there was some DC on the non-HPF path; as the HPF is made by inverse LPF and DC cancels, there must have been some DC injected somewhere. OK, found the obvious error at last. Filter A is finished.

And now polyphonic with SIMD, which makes it about 7 times more efficient. It now has graphics. Developing module μ for calculus purposes, then on to a few other niceties. Module μ is finished too. Any errors in the calculus should be reported, as this module is about a calculated sound; errors that sound better are for other modules.

On version 1.2.3 already 😀 as it even took this long to define a suitable versioning number system. At the moment controls are virtually CNC'ed on a grid, and panel graphics are manually kerned (as auto(Tm)ating this is perhaps overkill). The v1.3.4 release now includes the fat T with some bad disharmony as well as lovely 4th and 5th sync sounds and stuff.

L;D and R are now in planning, and are completed for the 1.6.9 release (2021-07-27). Maybe some speed optimizations, and next a more complex module. A nice website is also being made in markdown here, as that's what is expected by the default .json tags.

The 1.8.13 release (2021-08-02) includes 8 working modules, the new ones being Ω (a clock distributor with randomness) and V (a VCA triplet). The 1.9.15-rc2 release (2021-08-05) includes F, a morph filter. Hopefully it goes live soon when compiled by VCV. The Y gate sequencer is almost ready for 16 channels of triggers.

15 machines up there now (2021-08-19), including some oversampled ones, and some utilities for helping out with problems you didn't know you had. One minor fix for the F filter will be in the next release, plus some slight improvements and another 3 modules.

 

Templigeadicalogical Algebra

Well, where shall I start? All could see the sums were good, of those taught sums a protected right of the conversationally on. A heavy reign introduction to subtraction and hence divisional and rights of subdivision consensus elective protection from decimation.

Command hierarchies of the on for example a battle unit knowing it is one that is on, set to fight for subfeariors of command wants subjunct summative transmissive networks of optimization feedback of induction of sums?

The out wave of a famine “un” the handy past the time of fight show lean on the order of meditative? As the collective induced cook as woman work to collect the connection of hand fights to womb growth multipliers.

This might be fun at the food policy unit 😀

Fight coming up from blame of “monk” to obivatiate doubt on family protection, and thrust occasional mindsets to perimeters of risk reduction from onslaught foresightful of the time to tummy from mummy. As the di summand moved predecimand, the focus on god cycle before analytic deconstruction by sumandment of temple duty, became moot as the knowledge collapse in chaotic cycles not brought into feedback bode stability as the control hierarchy became argumentative dominant.

Bun fight!!!! Let then ate cake?? I’m ‘avin a go at integrands, might PID control feedback stuff if the boding camlmand hireachies summand with better cross information flow? Sumands like ? The first L of simon. Technically though it is a second order tension of l fighty fiesty hg an onset of bode instability so yicked up set delguage from a fWell, via a dissociable epigenetic panic.

Prove me hyper politely wrong on the abuse that extends from the fear of critique. I’m on, some of the nicest people I know are women. There on, but maybe not on on in the wordfield. The uncertainty potential of action in fold downs of understanding? I can say being a man I understand testosterone. The idiomatic fork as well extends from this therefore the competition between communicative and full active fully automatic via lack of information has its inductive effect.

Anti June could swear on summand a sailor! Error analysis in.  Idiomatic jokes are always a shitter. Control yourshelbves bint dat ladies. First orders, second orders power orders, summands and seais so ship? Lndend? Trade …

Accents for the poor? Accination programs? The enlightenment of the orbifold tonces. The dictum freeze from Oxford, an experiment in analytical management by saturnalian net. A distribute of multi-lingo automation?  Distribute, estimate, summand, perform error control minimization. Unlimnate uncertainty of position.

Parse Buffer Overflows? Dark Priorities.

Sounds like such fun. An irremovable, or a point-update fix on the press? https://github.com/jackokring/majar/blob/master/src/uk/co/kring/kodek/Generator.java sounds like fun too. Choices, choices? Amplified radial uncertainty of Δr·GMm·Δt ≤ ℏ·r²/2 was kind of the order of last night. Is it dark matter? Is tangential uncertainty in the same respect part of dark energy? The radial uncertainty in a sure instant of time, and the potential gravitational energy? A net inward force congruent with dark energy?

And a tangential version: the squared hypotenuse of radius and tangential uncertainty of radius resultant? That leads to a reduction of gravity at large radius and is more like dark energy. More evidence for a spectrum of uncertainty amounts, hence the “less than or equals” being simplistic on an actuality?

Oh, no I’ll have to investigate the last GET/POST before errors … how boring (last time an Indian) … guess who?

The Small Big G and Why Gravity?

As G, the gravitational constant, is small compared to other force constants, Δr would be bigger in gravity for the same amplified ℏ uncertainty. With the time accuracy of light arrival in the visible range, the radial uncertainty at a high radial distance integrates over the non-linearity of the 1/r² force, for a net inward pull. Tangentially, the integral would net a reduction in gravity.

Δr·GMm·Δt ≤ ℏ·r²/2

So a partial reason for dark matter and dark energy to be explained by quantum gravity. It's a simple formula, with Δv/Δt as a substitute for Δp = mΔv, using F = ma = GMm/r² in ΔxΔp ≤ ℏ/2, so the answer is approximate; an r±Δr might be more appropriate for exacting calculations, and r²+Δr² as a tangential hypotenuse.
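Writing out the substitution the paragraph describes, keeping the author's ≤ convention throughout (the textbook uncertainty relation uses ≥):

```latex
\Delta x\,\Delta p \le \frac{\hbar}{2},\qquad
\Delta p = m\,\Delta v = m\,a\,\Delta t = \frac{GMm}{r^2}\,\Delta t
\;\Rightarrow\;
\Delta r\,\frac{GMm}{r^2}\,\Delta t \le \frac{\hbar}{2}
\;\Rightarrow\;
\Delta r\,GMm\,\Delta t \le \frac{\hbar\,r^2}{2}.
```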

As https://en.wikipedia.org/wiki/Coulomb%27s_law is 20 orders of magnitude stronger, the dark Coulomb force would need a radius 10 orders of magnitude larger for the same effect.

As the Mass by the Cube, and the Uncertainty by the Square.

As the distance to the centre of a gravitational lens increases, the radial uncertainty of the mass becomes significant, effectively reducing the minimal acceleration due to gravity and growing the volume bulk integral of mass in uncertainty. The force delta would be inverse cubic, countered by the cubic growth in integration volume. In isotropy the force would therefore become a fixed-quantity effect.

This is not even considering the potential existence of a heavy graviton, or the concept of conservation of a mass information velocity that would have a dark energy effect. It still seems “conservation of acceleration” is not a taught effect, considering there are many wine glasses that would have loved to know about it.

As for the rapid running-constant increase toward the unification energy, and what inner sun horizons would do to a G magnification? Likely not that relevant. Only the EM force seems to increase in coupling as the energy of the system dilates in time. This would imply the other three standard forces decrease, necessitating an increase in radial uncertainty on average. The strong force has a growth-with-distance effect below the confinement distance, and so as the radius reduces, a Δr·k·Δt ≤ ℏ/2r rule is likely, which would lead to the most likely reciprocal isomorphism of dark matter and dark energy.

Due to quark mass differences, k is therefore one of 15 = 6·(6−1)/2 constants depending on the quark pair, so a triad-product pentad structuring of force to acceleration might occur, with further splitting by boson interactions with quarks. Maybe this is a long shot to infer finality on the low-energy quark set of 6. Likely a totient in there for an 8. That's all in the phi line and golden, silver and forcing theorems. I wonder if forcing theorems have unforcing and further forcing prerogatives?

≤?

You could be right. So? It's not as though it affected any of the local accelerators I don't have. If it's all about the bit not understood, then as a product constraint, it is where the action is at. As the maths might work, I am speculating that the further equations will be in a less-than form and so need fewer corrections. Premature optimization is the root? Any tiny effect would be on that side of equality, perhaps. Maybe it was just a tilt at the suggestion of an inverse isomorphism. I couldn't say, but that's how it exited my mind.

Project “majar”

majar is a Java package, currently in development. The current focus is on a shell language. Abstraction interfaces for things will arrive, and then implementations.

  • KeyBase – a database based on the idea of a “BulkStream” (a Base) supporting a 5th normal form of relational database, where internally everything inherited from Key can be stored. Fields are not stored in records; each field value becomes a record, making an object-storing kind of database. Imagine a field query, and then obtaining records for all tables and databases.
  • Kodek – the KODEK of K Ring naming fame. I’m sure I’ll get around to a KeyBase Store specialising in a Kodek supporting Class.
  • majar (intentionally lower case) – a scripting language with a bash launch script so that majar becomes the language from “ma.jar”, get it? 
  • Abstracting the script language to run web applications.
  • A Java Servlet extension class for easing some of the pain.
  • A port 287 IP proxy for localhost compression to the publication of “My Public Computer” interface. Also an application server for the browser client.

K Ring CODEC Existential Proof

When p = 2q, L(0) is not equal to L(1).

Find n such that (L(0)/L(1))^(2n+1) defines the number of bias elements for a certain bias exceeding 2:1. This is not the minimal number of bias elements, but it is a faster computation of a sufficient existential cardinal order. In fact, it's erroneous. A more useful equation is

E = Sum[(1-p)*(1-q)*(2n-1)*(p^(n-1))*(q^(n-1)) + ((1-p)^2)*2n*(q^n)*(p^(n-1)), {n, 1, Infinity}]
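As a numerical sanity check, the series as written can be summed directly; the p and q values below are illustrative, and the function name is mine:

```python
def entropy_bias_sum(p, q, terms=10_000):
    """Numerically sum the series E exactly as stated in the text."""
    total = 0.0
    for n in range(1, terms + 1):
        total += ((1 - p) * (1 - q) * (2 * n - 1) * p ** (n - 1) * q ** (n - 1)
                  + (1 - p) ** 2 * 2 * n * q ** n * p ** (n - 1))
    return total

# The terms decay like (p*q)^n, so convergence is rapid for p, q < 1.
print(entropy_bias_sum(0.5, 0.25))
```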

This shows an asymmetry in pq for even counts of containment between adding entropic pseudo-randomness. So if the direction is PQ-biased detection and subsample control via horizontal and vertical position splitting? The bit quantity of clockwise parity XOR reflection count parity (CWRP) has an interesting binary sequence. Flipping the clockwise parity and the 12/6 o'clock location inverts the state for modulation.

So asymmetric baryogenesis: that process of some bias between antimatter and matter with an apparently identical mirror symmetry between each other. There must be an existential mechanism, and in this mechanism a way of digitizing the process and finding the equivalents to matter and antimatter. Some way of utilizing a probabilistic asymmetry, along with a time application to the statistic, so that apparent opposites can be made to present a difference on some time presence count.

Proof of Topological Work

A cryptocoin mining strategy designed to reduce power consumption. The work is divided into tiny bursts of work with bits of stall caused by data-access congestion. The extensive nature of solutions and the variance of solution time reduce conflict, as opposed to a single hash-function solve. As joining a fork increases splitting of share, focusing the tree spread into a chain has to be considered. Pull-request ordering tokens can expire until a pull request is logged with a solution; this means pull-request tokens have to be requested at intervals, and also after expiry, while any solution needs a valid pull-request token to be included in the pull request, such that the first solution in a time interval can invalidate later pull requests solving the same interval.

The pull-request token contains an algorithmic random and the head random based on the solution of a previous time interval, which must be used to perform the work burst. It therefore becomes pointless to issue pull-request tokens for a future time interval, as the head of the master branch has not been fixed and so the pull-request token would not, by a large order, be checksum valid.
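A hypothetical sketch of the token flow described above; all names (`PullToken`, `head_random`, `interval`) are illustrative assumptions, not a published protocol:

```python
import hashlib
from dataclasses import dataclass

# Sketch: a token binds a time interval to the head random derived from
# the previous interval's solution; a token for the wrong interval, or
# with a tampered body, fails validation.
@dataclass
class PullToken:
    interval: int       # the time interval this token is valid for
    head_random: str    # random derived from the previous interval's solution
    checksum: str = ""

    def seal(self):
        self.checksum = hashlib.sha256(
            f"{self.interval}:{self.head_random}".encode()).hexdigest()
        return self

def valid_for(token, current_interval):
    # A token expires once its interval has passed, and a token issued
    # for a future interval cannot be checksum-valid against a fixed head.
    expected = hashlib.sha256(
        f"{token.interval}:{token.head_random}".encode()).hexdigest()
    return token.checksum == expected and token.interval == current_interval

tok = PullToken(interval=42, head_random="abc").seal()
print(valid_for(tok, 42))  # True
print(valid_for(tok, 43))  # False (expired)
```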

The master head address becomes the congestion point. The address is therefore published via a torrent-like mechanism with a clone performed by all slaves who wish to become the elected master. The slaves also have a duty to check the master for errors. This then involves pull-request submissions to the block-tree (as git is) on various forks from the slave pool.

This meta-algorithm therefore can limit work done per IP address by making the submission IP be part of the work specification. Some may like to call it proof of bureaucracy.

The Cryptoclock

As running a split network on a faster clock seems the most effective hack, the master must set the clock by signed publication. On a clock split, the closest modulo-hashed time plus block slave salt wins. The slave throne line is ordered on the closest modulo-hashed values for salt, with signed publication. This ensures a corrupt master must keep all slave salts (or references) in the published blocks. A network join must demote the split via a clock moderation factor. This ensures that culling a small subnet to run at a higher rate, to disadvantage the small subnet, is punished on the net reunion by the majority of neutrals on the throne line in the master elective, via the punitive clock-rate deviation from the majority. As you could split and run lower in an attempt to punify!
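A hypothetical sketch of the “closest modulo hashed” throne-line election; the names, hash choice and modulus are illustrative assumptions:

```python
import hashlib

# Each slave's salt is hashed together with the published clock value,
# and the smallest residue wins the throne line for that clock tick.
# Deterministic given the clock, so every node computes the same winner.
def residue(clock, salt, modulus=2 ** 16):
    h = hashlib.sha256(f"{clock}:{salt}".encode()).digest()
    return int.from_bytes(h[:4], "big") % modulus

def elect(clock, salts):
    return min(salts, key=lambda s: residue(clock, s))

winner = elect(1234, ["alice", "bob", "carol"])
print(winner in ["alice", "bob", "carol"])  # True
```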

Estimated 50 pounds sterling 2021-3-30 in bitcoin for the company work done 😀

The Rebase Compaction Bounty (Bonus)

Designed to be a complex task, a bounty is set to compress the blockchain structure to a rebased, smaller data equivalent. This is done by effectively removing many earlier blocks and placing a special block of archival index terminals for non-transferred holdings in the ancient block history. This is bound to happen infrequently to never, and is set at a lotto rate depending on the mined percentage. This would eventually cause a work spurt based on the expected gain. The rule controlling the energy expenditure versus the archival cost could be integrated with wallet stagnation (into the void) by setting a wallet timeout of the order of many years.

A form of lotto inheritance for the collective data duplication cost of historic irrelevance. A super-computation only to be taken on by the supercomputer of the age. A method, therefore, of computational research as it were; not something for everybody to do, but easy for everybody to check as they compact.

An Open Standard for Large Event COVID Passports?

The POX Algorithm RFC: how to show an auth token when you have privacy but no booking or other door duty. The phone-occluded xenomorph algorithm. A complex cypher to protect data at all points in transmission. What really gets shown is an event-specific checksum verify on some encrypted data, which can be further queried by a provider (such as the NHS) to obtain validity and scope for the event purpose, on a statistical check basis to reduce server traffic load and focus on hot areas.

At 2953 bytes of data capacity in a QR barcode (23624 bits) there is enough scope for a double signature and some relevant data in escrow for falsification auditing. The following data layers are relevant with keys in between.

  • Verify credential entry VCE (the blind of public record customs inquiries)
    • validity decrypt key (event private key part) VDK QR
  • Door event transit DET (the over the shoulder mutable) QR
    • event encrypt key (event public key) EEK QR
  • Phone independent ephemeral PIE (the for me check)
  • A public blockchain signed hashed issue SHI (the public record) QR
    • authority signature keys (the body responsible for a trace of falsifications)
    • hashed phone number key (symmetric cypher)
    • record blind key (when combined with the event private key part makes the effective private key. Kept secret from the event)
    • confidentiality key (database to publication network security layer)
  • Actual data record ADR (the medical facts)

Various keys are required but covering the QR codes needed is perhaps better.

  • The manager VDK QR (given to the door manager)
  • The issue SHI QR (given by the provider)
  • The event EEK QR (posted online or outside the event)
  • The entry DET QR (made for the bouncer to scan)

At the point of issue there may be a required pseudo-event to check that all is working well. The audit provider, or the provider itself (such as the NHS), has enough data on a valid VCE to call the user and the event in a conference call. Does the credential holder answer to speak to an echoing bouncer? Does the provider send a text?

Nitro Bacon COVID Hypothesis

So it seems there is a larger fraction of ethnic dead in the COVID actuarial record in the UK, and it does not seem to be genetic. This leaves environmental causation. I posit that bacon and other nitro curing salted meat products are eaten in larger amounts by the sections of the populace recording a lower than average actuarial death rate.

The proposed mechanism of action for this effect is through lifting blood pressure by consumption of nitro curing salts and so effecting a partial closure of the ACE2 receptor such that the infection affinity of the covid spike protein is reduced.

Dietary intakes of at-risk population sectors include reduced sodium and processed meat as a medical diet, which may indicate that further research is required on gathering salted preserve intake versus ICU outcome.

I am quite surprised that many apparently random statistics are not captured on the off chance that significance may be shown. It is hardly a problem for the central limit theorem to be applied when the actuarial record exceeds 100,000.

A likely non-chatty and a few more dead seems a better telly for the masses or not? It ain’t even cosmically possible to solve language puzzles these days. A word starting with N and ending in G can lead to a 24 hour Facebook ban. It could best be expressed by saying Obama didn’t have tits. I appealed, but luckily or not due to covid the bums are not on seats at this time for such gatekeeping. Maybe they all ironically died due to lack of nitro salts? I wonder if the pearly gates they may or may not love has a shoot to hell policy?

Still a few hours before I can create a Facebook group “Borg Unimatrix Thought Distribution Node” for maximal profit. Borg is likely offensive to the Borg as maximal entropy of algorithmic production would likely be higher on the list if elimination of the surplus to requirement individuals was not placed so high.

Medi-ochre and society as corruption lowers society, the leaders can’t help but choose from lesser options and become the pictures of their own making.

Xenozootic Virology

The limited but perhaps influential evidence that covid might have started as an unnoticeable viral cross infection into humans (Italian smoker study and some Chinese ideas) may be responsible for the 1/3rd no symptom transmission, as it is possible the Wuhan strain was just a mutation of the unnoticed base virus which became even more infectious and of greater severity.

This knowledge might indicate an occluded outbreak which being of low infectivity and unnoticeable severity might have already travelled enough of the world to infect about 30% of the world’s population so providing some kind of cross-immunity with the Wuhan strain. For all that I know it could have started with some guy called Keith in Hackney Downs.

A backtrace on the per cent of nonsymptomatic in area density across the globe may have indicative potential on the origination of the Pangolin Mary coming into contact with the occluded strain. Although factoring in the kissy romance of the Italian greeting would have to be used to normalize the neutral expectation of transmission under occlusion, along with other societal locale idioms. Such things would potentially affect the nonsymptomatic occluded rate compared to the covid hospitalization rate, and hence be estimable to some extent.

The study of R0 unbiased via lockdown percolation along with critical actuarial induction of lockdown would lead to likely numbers on the binding affinity … blah, redacted. **** ****** ** …

I mean like 30% might be one of those 30/70 behaviourisms via some genetic activation, providing pre-MHC preferential or J section locations of activity.

Free Form Thoughts

A Classic Movie Voice Over

And so did the cutter of stone from the sky release the priest of his knowledge of lack of contact such that a stone cold comparison could be seen, and such that it meant that he still would still not know a hug.

And it became decided that the balance between overtaking the lessers versus timed up greaters as an order for the taking sensing a taked in the mistook, all because analytic in speed of absorption, such that little to as much was done.

How to tell the apprentice from beyond the execution, and what of the touchy humours?

And as the unity lowered with the cut words “different cutter” as they appeared. From this a division of opinion ended in more than a happen-seat. And so it was and might is a mighty word.

The multi-cultural (noel) was seen perhaps owed to the hives of man and fortuitous gods or sub-gods. Then what could be done? Why would they prey upon an idol god? For it was upon the nature of being that action did perform some or a difference upon tribes and detribulates. If the payment is freedom then what is it to be holden to a duty?

Bode, bode and thrice bode that minus one is a bitch. Obvious dick in womb joke and all. All bar one off course. Yes, an extra-oneous F. Rise again dear cheapo.

And as he placed ring finger of his fishy right hand upon the pre-chopped and processed tree stump, declaring “take it and fuck off”, all was a bit more cagey and costing of those that never get told of the prices of alternate labour avoidance for profit.

Nice story so far dear observer. I think you’d like a little titillation for your money now. Bring forth babe percents and vital statistics.

What a placement of mind in such a being of knowledge. What could become? What of it for removals of a thing never cast, never worried, never done.

In the beginning. A shrewd ploy to an end. As all became seated and thrust needed no explanation.

All the Too Messy for Sci-Fi Complaints

Assuming GPT-3 is really good at story completion, how can anyone say that errors in word sequencing are irrelevant for the provocation phrase issued to an AI, when the purpose is completion from the source through sense and not the generation of a more precise bore?

Although the mathematics of a form of complexity may be essential, the actual origin of the mathematics might not be as essential as a way of introducing the definitive emergents as one would assume. Multiple originations of emergence isomorphism in the completeness of behaviour might be, and likely are, possible.

The latest AI joke is about the Silly can’ts versus the car bonned. Oh, dear. 

Gradients and Descents

Consider a backpropagation which has just been applied to a network under learning. It is obvious that various weights changed by various amounts. If a weight changes little, it can be considered good. If a weight changes a lot, it can be considered an essential definer weight. Consider the maximal definer weight (the one with the greatest change) and change it a further per cent in its defined direction. Feedforward the network and backpropagate again. Many of the good weights will return closer to where they were before the definer pass and can be considered excellent. Others will deviate further and be considered ok.

The signed tally of definer(3)/excellent(0)/good(1)/ok(2) can be placed as a variable of programming in each neuron. The per cent weight to apply to a definer, or more explicitly the definer history deviation product as a weight to per cent for the definer’s direction, makes a training map which is not necessary for using the net after training is finished. It does however enable further processing such as “excellent definer” detection. What does it mean?
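A toy sketch of the definer pass on a single linear layer may help; the 1% push size and the tally thresholds are my assumptions, since the text leaves them open:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
y = rng.normal(size=(32, 1))
W = rng.normal(size=(4, 1)) * 0.1
lr = 0.01

def grad(W):
    # MSE gradient for a single linear layer
    return X.T @ (X @ W - y) / len(X)

step1 = -lr * grad(W)
W = W + step1

# Definer: the weight with the greatest change, pushed a further per cent
# in its defined direction (assumed to mean 1% of its magnitude).
d = np.unravel_index(np.argmax(np.abs(step1)), W.shape)
W[d] += 0.01 * np.sign(step1[d]) * np.abs(W[d])

step2 = -lr * grad(W)

# Tally: excellent(0) weights head back toward where they were, ok(2)
# weights deviate further, barely-moved weights stay good(1), and the
# definer itself scores 3 signed by its direction.
tally = np.where(np.sign(step2) != np.sign(step1), 0, 2)
tally[np.abs(step2) < 0.1 * np.abs(step1).max()] = 1
tally[d] = 3 * int(np.sign(step1[d]))
```

The tally array is exactly the per-neuron "variable of programming" the next paragraph talks about.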

In a continual learning system, it indicates a new rationale requirement for the problem as it has developed an unexpected change to an excellent performing neuron. The tally itself could also be considered an auxiliary output of any neuron, but what would be a suitable backpropagation for it? Why would it even need one? Is it not just another round of input to the network (perhaps not applied to the first layer, but then inputs don’t always have to be so).

Defining the concept of definer epilepsy, where the definer oscillates due to weight gradient magnification, implies the need for the tally to be a signed quantity and also implies that weight normalization to zero should be present. This requires (though it has not been proven to be the only sufficient condition) that per cent growth from zero should be weighted slightly less than per cent reduction toward zero. This can be factored into an asymmetry stability meta.

A net of this form can have memory. The oscillation of definer neurons can represent state information. They can also define the modality of the net knowledge in application readiness while keeping the excellent all-purpose neurons stable. The next step is physical and affine coder estimators.

Limit Sums

The convergence sequence on a weighting can be considered isomorphic to a limit sum series acceleration. The net can be “thrown” into an estimate of an infinity of cycles programming on the examples. Effectiveness can be evaluated, and data estimated on the “window” over the sum as an inner product on weightings with bounds control mechanisms yet TBC. PID control systems indicate in the first estimate that differentials and integrals to reduce error and increase convergence speed are appropriate factors to measure.
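The limit-sum analogy can be made concrete with a classic series acceleration; Aitken’s Δ² process (one choice among many, my pick for the sketch) "throws" the alternating series for ln 2 toward its infinity-of-cycles limit:

```python
import math

def partial_sums(n):
    # alternating harmonic series: 1 - 1/2 + 1/3 - ... -> ln 2
    s, out = 0.0, []
    for k in range(1, n + 1):
        s += (-1) ** (k + 1) / k
        out.append(s)
    return out

def aitken(s):
    # Aitken delta-squared: an accelerated "window" over the sum
    return [s[i] - (s[i + 1] - s[i]) ** 2 / (s[i + 2] - 2 * s[i + 1] + s[i])
            for i in range(len(s) - 2)]

raw = partial_sums(10)
acc = aitken(raw)
# the accelerated tail sits far closer to ln 2 than the raw tail
```

The analogy to training: the raw partial sums are the weight states per cycle, and the accelerated value is the estimate of the weights after an infinity of cycles.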

Dynamics on the per cent definers so to speak. And it came to pass the adaptivity increased and performance metrics were good but then irrelevant as newer, better, more relevant ones took hold from the duties of the net. Gundup and Ciders incorporated had a little hindsight problem to solve.

Fractal Affine Representation

Going back to 1991 and Michael Barnsley developing a fractal image compression system (Iterated Systems’ FIF file format). The process was considered computationally intensive in time for very good compression. Experiments with the FIASCO compression system, an open-source derivative, indicate that best performance lies at low quality (about 50%): very fast, but not exact. If the compressed image is subtracted from the input image and the residual further compressed a number of times, performance is improved dramatically.
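The residual-stages idea can be sketched with a stand-in lossy pass (coarse quantization here, purely an assumption in place of a real FIF/FIASCO encode); compressing the residual a number of times drives the reconstruction error down stage by stage:

```python
import numpy as np

def lossy_pass(x, step):
    # stand-in for one fast, low-quality lossy encode/decode round trip
    return np.round(x / step) * step

img = np.random.default_rng(1).normal(size=(8, 8))
recon = np.zeros_like(img)
step, errors = 1.0, []
for _ in range(4):
    residual = img - recon
    recon += lossy_pass(residual, step)  # compress the residual, add it back
    step /= 2                            # each stage refines the last
    errors.append(np.abs(img - recon).max())
# errors shrink with every residual stage
```

Each stage halves the quantization step here, so the worst-case residual roughly halves per pass; a real fractal residual chain would behave less regularly but in the same direction.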

Dissociating secondaries and tertiaries from the primary affine set allows disjunct affine sets to be constructed for equivalent compression performance where even a zip compression can remove further information redundancy. The affine sets can be used as input to a network, and in some sense, the net can develop some sort of affine invariance in the processed fractals. The data reduction of the affine compression is also likely to lead to better utilization of the net over a convolution CNN.

The Four Colour Disjunction Theorem.

Consider an extended ensemble. The first layer could be considered a fully connected layer distributor. The last layer could be considered to unify the output by being fully connected. Intermediate layers can be either fully connected or colour limited connected, where only neurons of a colour connect to neurons of the same colour in the next layer. This provides disjunction of weights between layers and removes a competition upon the gradient between colours.
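A colour-limited layer is just a fixed block mask on the weights; a small sketch (the colour assignment by index modulo is my assumption):

```python
import numpy as np

n, colours = 16, 4
colour = np.arange(n) % colours          # assign each neuron a colour

# only same-colour neurons connect between adjacent layers
mask = (colour[:, None] == colour[None, :]).astype(float)

W = np.random.default_rng(0).normal(size=(n, n)) * mask
# gradients through W * mask stay disjunct per colour: a quarter of the
# weights (and of the gradient competition) versus fully connected
```

Training simply multiplies both the weights and their gradients by the same mask, so no cross-colour gradient ever flows in the intermediate layers.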

Four is really just a way of seeing the colour partition and does not really have to be four. Is an ensemble of 2 nets of half size better for the same time and space complexity of computation, with a resulting lower accuracy per colour channel but in total higher discriminatory performance by the disjunction of the feature detection?

The leaking of cross information can also be reduced if it is considered that feature sets are disjunct. Each feature under low to no detection would not bleed into features under medium to high activation. Is the concept of grouped quench useful?

Query Key Transformer Reduction

From a switching idea in telecommunications, an N*N array can be reduced (remaining mostly functional thanks to sparsity) to an N*L array pair and an L*L array. Any cross-product essentially becomes (from its routing of an in into an out) a set of 3 sequential routings, with the first and last being the compression and expansion multiplex to the smaller switch. Cross talk grows to some extent, but this “bleed” of attention is a small consideration given the variance spread of having 3 routing weights product up to the one effective weight, and computation is less because L is a smaller number than N.

The Giant Neuron Hypothesis

Considering the output stage of a neuronal model is a level sliced integrator of sorts, the construction of RNN cells would seem obvious. The hypothesis asks if it is logical to consider the layers previous to an “integration” layer effectively an input stage, where the whole network is a gigantic neuron and integration is performed on various nonlinear functions. Each integration channel can be considered independent but could also have post layers for further joining integral terms. The integration time can be considered another input set per integrator functional. To maintain tensor shape, as two inputs per integrator are supplied, the first differential would also be good, especially where feedback can be applied.

This leads to the idea of the silicon connectome. Then as now as it became, integration was the nonlinear of choice in time (a softmax divided by the variable, as goes with [e^x−1]/x. A groovemax if you will). The extra net unineuron integration layer offers the extra time feature of future estimation at an endpoint integral of network evolved choice. The complexity of backpropagation of the limit sum through fixed constants and differentiable functions, for a zero adjustable layer insert with scaled estimation of earlier weight adjustment on previous samples in the time series under integration, for an ideal propagatable. Wow, that table’s gay as.
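A numerically careful sketch of the [e^x−1]/x "groovemax" nonlinearity; the handling of the removable singularity at zero is my addition, the text only gives the formula:

```python
import numpy as np

def groovemax(x):
    # (e^x - 1)/x, with the removable singularity at 0 filled by its limit, 1
    x = np.asarray(x, dtype=float)
    safe = np.where(x == 0.0, 1.0, x)      # dodge the 0/0
    out = np.expm1(safe) / safe            # expm1 keeps precision near zero
    return np.where(np.abs(x) < 1e-12, 1.0, out)
```

It is smooth, positive, monotone increasing, equals 1 at the origin, and grows like e^x/x for large inputs, which fits the "integration nonlinear of choice" role described above.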

This network idea is not necessarily recursive, and may just be an applied network with a global time delta since last evaluation for continuation of the processing of time series information. The actual recursive use of networks with GRU and LSTM cells might benefit from this kind of global integration processing, but can GRU and LSTM be improved? Bistable cells say yes, for a kind of registered sequential logic on the combinationals. Considering that a Moore state machine layout might be more reductionist to efficiency, a kind of register layer pair for production and consumption to bracket the net is under consideration.

The producer layer is easily pushed to be differentiable by being a weighted sum junction between the input and the feedback from the consumer layer. The consumer layer is more complex when differentiability is considered. The consumer register really could be replaced by a zeroth differential prediction of the future sample given past samples. This has an interesting property of pseudo presentation of the output of a network as a consumptive of the input. This allows use of the output in the backpropagation as input to modify weights on learning the feedback. The consumer must be passthrough in its input to output, while storage of samples for predictive differential generation is allowed.

So it’s really some kind of propagational Mealy state machine. An MNN if you’d kindly see. State of the art, art of the state. Regenerative registration is a thing of the futured.

Post-Modern Terminal CLI

As is usual with all things computing, the easy road of bootstrap before security is just an obvious order of things. It then becomes a secondary goal to become the primary input moderation tool such that effective tooling brings benefits while not having to rely on the obscurity of knowledge. For example, a nice code signature no execution tool where absolutely no code even becomes partially executed if the security situation indicates otherwise.

A transparent solution is a tool for development which can export a standard script to just run within today’s environment. As that environment evolves within the future it can take on the benefits of the tool, so maybe even to the point of the tool being replaced purely by choice of the user shell, and at a deeper level by a runtime replacing the shell interpreter at the system level.

The basic text edit of a script at some primary point in the development just requires a textual representation, a checksum in the compiled code (which is in a different file) and a checksum to allow a text override with some security on detecting a change in the text. This then allows possible benefit by a recompile option along with just a temporary use of the textual version. It won’t look that hard in the end, with some things just having a security rating of “system local” for a passing observer.
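A sketch of the checksum gate in Python (the function names and the two-outcome policy are hypothetical; the idea is only that a changed text forces the recompile-or-override path rather than silent execution):

```python
import hashlib

def checksum(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

# stored inside the compiled artefact at build time
compiled_checksum = checksum("echo hello")

def gate(script_text: str) -> str:
    # decide what the runtime may do with the textual version
    if checksum(script_text) == compiled_checksum:
        return "run-compiled"             # text unchanged: fast path
    return "recompile-or-override"        # text changed: security path

verdict = gate("echo hello")              # matches the compiled checksum
tampered = gate("echo tampered")          # does not match
```

A real tool would sign the checksum rather than merely store it, but the control flow is the same.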

ANSI 60 Keyboards? An Exception to the Rule?

More of an experiment in software completion. Jokes abound.

A keyboard keymap file for an ANSI 60 custom just finished software building. Test to follow, given that cashflow prevents buying and building of hardware on the near time scale. Not bad for a day!

A built hex file for a DZ60 on GitHub so you don’t have to build your own with an MD5 checksum of 596beceaa446c1f1b55ee5e0a738f1c8 to verify for duelling the hack complexity. EDIT: version 1.7.2F (Enigma Bool Final Release). Development is complete. Only bug and documentation fixes may be pending. 

It all stems from design and data entry thinking, and small observations like the control keys being on the corners like the thumbs to chest closeness of baby two-finger hackers instead of the alt being close in for the parallel thumbs of the multi-finger secretariat.

The input before the output, the junction of the output to our input. It’s a four-layer main layout with an extra four layers for function shift. Quite a surprising amount can be fit in such a small 60 keyspace.

The system allows intercepts of events going into the widget, yet the focus priority should be picking up the non-processed outgoings. Of course, this implies the atom widget should be the input interceptor to reflect the message for outer processing in a context. This implies that only widgets which have no children, or administered system critical widgets, can processEventInflow while all can processEventOutflow, so silly things have less chance of happening in the certain progress of process code.

Perhaps a method signature of super protected such that it has a necessary throws ExistentialException or such. Of course, the fact RuntimeException extends Exception (removing a code compilation constraint) is a flaw of security in that it should only have allowed the adding of a constraint, by making Exception extend RuntimeException (in the code compile protection against an existential).

Then the OS can automatically reflect the unhandled event back up the event outflow queue, along with an extra event (with a link to the child in, and an exposed list of its child widgets) to outflow. An OrphanCollector can then decide to still show the child widgets or not, with the opportunity of newEventInflow. All widgets could also be allowed to newEventOutflowForRebound, itself a super protected method with a necessary throws ExistentialException (to prevent injection of events from non-administered widgets).

An ExistentialException can never be caught in user code to remove the throws clause and use of super try requires executive privilege to prevent executive code from being loaded by the ClassLoader. It could run but in a lower protection ring until elevated.

Accounts Year End 2020

No trading this year; payments in by director to cover bank charges and web services. Quite a year of nothing much happening on the contract front. I think COVID has had a vicious effect on many companies’ capital, but as the company has no creditors, there are no worries of being up against the wall this year.

An Interpolation of Codecs into the ISO Network Model

  1. Paper
  2. (Media Codec)
  3. Symbols
  4. (Rate Codec)
  5. Envelope
  6. (Ring Codec) 3, 2 …
  7. Post Office
  8. (Drone codec)
  9. Letter Box
  10. (Pizza codec)
  11. Name
  12. (Index codec)
  13. Dear

Considering that the ISO network model of 7 layers can be looked at as isomorphic to a letter delivery, with Paper being the lowest hardware layer and Dear being the application layer, there is a set of 6 codecs which transform layer to layer, and so a more exacting 13 layer model is just as obvious given the requisite definitions.

There also would exist a Loop Codec which would virtualize via an application a container of a virtual hardware layer on which another stack of 13 could be founded.

23

The classic 3*4+1+1+4+(9-1)/2+[this one @23rd]+(9-1)/2. For a total of 27. The whole 163 and x^2-x+41 Technetium (+2) connection. Interesting things in number theory along with sporadic groups and J4 which is the only one with an ordered factor of 43 and an 11^3. Promethium at 61 is connected somehow maybe by 12 * 62 = 744 with something not doing the 10 “f-orbitals” thing, and 23 comes in on the uniqueness of factorization too along with 105.  Along with the 18 families of groups 26(or 27)+18 = 44(or 45) in cubic elliptic varieties of the discriminant.

26 letters in the alphabet plus space? Rocks with patterned circles on an island? Considering one of the 44 is the circle integer modulo ring with no “torsion”, then there are kind of 43 bending varieties and some kind of dimension null over a double bend “cover” inclusion as a half factor of one of the main 18 sequence groups. Likely a deep connection to factor square-free “Möbius mu” and topological orientability.

Polynomial Regression Estimators

Consider a sampled sequence of n samples and an interpolation of order n. The sample sequence can be differentiated by backward and forward differences of all n samples to make a first differential sequence of n elements or more. This too has a polynomial fit. The polynomial can be integrated to make an order n+1 polynomial with a new constant which can be estimated by a regression fit of the n samples. This can then make an n+1th estimation to show a fit ad infinitum. Weighting the regression error based on sample time locks more history and less prediction into the forecast but fits less on the predictive end. At the opposite extreme the forecast is based on prediction, not on history. In between is a concept of optimal.
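A numeric sketch of the differentiate, fit, integrate, refit-the-constant loop; the sample function, the fit order, and the plain-mean constant regression are all my assumptions for illustration:

```python
import numpy as np

t = np.arange(6.0)
y = 0.5 * t**2 - t + 3             # hypothetical sampled sequence

d = np.gradient(y, t)              # backward/forward differences
p = np.polyfit(t, d, 3)            # polynomial fit of the differential sequence
P = np.polyint(p)                  # integrate: order n+1, constant unknown
c = np.mean(y - np.polyval(P, t))  # regression fit of the new constant

y_next = np.polyval(P, 6.0) + c    # the n+1 th estimation (truth is 15)
```

The one-sided differences at the endpoints are what pull the extrapolation off truth; weighting the regression toward recent samples, as the paragraph suggests, is exactly the knob that trades that history error against prediction error.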

A genetic algorithm optimizing the weighting provides a fitness score based on future measured truth. The population spread acts as a Monte-Carlo, and some selection for spreading entropy as well as future weight would input entropy flair for efficiency by the association of prediction clustering elimination and outlier promotion for risk estimates. An irony of population size and death by accounting in genetic algorithms weeds out some “bum notes” but “right on” in the ill computed silicon heaven (via Löb’s theorem of truth by confirmed assumption). Hence an eviction cache as in silicon hardware. What measures the crash instability of markets in the recession local optimum?
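A toy of the weighting-by-genetics idea; the exponential time-weight form, population size, and mutation scale are invented for the sketch, and fitness is scored on the future measured truth as described:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(8.0)
y = np.sin(t / 2)                      # hypothetical observed series
truth_next = np.sin(8.0 / 2)           # future measured truth for scoring

def fitness(w):
    weights = np.exp(-w * (t[-1] - t)) # lock more or less history in
    p = np.polyfit(t, y, 2, w=weights)
    return -abs(np.polyval(p, 8.0) - truth_next)

pop = rng.uniform(0.0, 2.0, size=16)
for _ in range(20):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)][-8:]        # death by accounting
    children = parents + rng.normal(0.0, 0.1, 8)  # mutation: Monte-Carlo spread
    pop = np.concatenate([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
```

Keeping the parents each generation is the elitism that makes the best score monotone, the silicon-heaven eviction cache in miniature.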

Yes, I do imply logic machines are operating reality. I do not think all the machines use the same operator algebra. Some algebras survive, some do not. There is nothing in the closure complexity of efficient algebras supporting the accumulation of axioms as leisure free from a suppressed fight.

And Physics

The number of light bosons stems from the cyclotomy of 18 (divisors 1, 2, 3, 6, 9, 18 and new roots 1, 1, 2, 2, 6, 6) for 18 normal bosons (6 free ones as 18−12 [not fermion bound], sounds like some regular “found bosons”). If the equality of the mass-independent free space view to zero is just an approximation to the reciprocal of a small oscillation, then a differential equation for such is just scaled by units of Hz² and would place the cyclotomy at 20 (divisors 1, 2, 4, 5, 10, 20 and new roots 1, 1, 2, 4, 4, 8) for 20 dark bosons perhaps? Or maybe it works inversely, reducing the cyclotomy to 16 (divisors 1, 2, 4, 8, 16 and new roots 1, 1, 2, 4, 8) for 16 dark bosons?
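The divisor and new-root tallies quoted here are just Euler’s totient evaluated over the divisors (the counts of new primitive roots per divisor), which a few lines can verify, including the fact that the new roots always sum back to n:

```python
from math import gcd

def phi(n):
    # Euler's totient: how many k <= n are coprime to n (new primitive roots)
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def new_roots(n):
    return [phi(d) for d in divisors(n)]

light  = new_roots(18)  # the tallies quoted for 18
dark20 = new_roots(20)  # and for 20
dark16 = new_roots(16)  # and for 16
```

Whatever one makes of the boson counting, the arithmetic in the paragraph checks out.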

Or “free dark bosons” at a tally of 2 (or -2)? I think I used η with a floating ~ (tilde) to indicate this secondary oscillation. Fermi exclusion unique factor domain expansion? Non-unique compaction “gravity”?

What tickles my mind is the idea of 2 “ultra free dark bosons” as an idea. Put another way <<So this Pauli exclusion of fermions. If bosons (some of them as theoretical) confine and attach to fermions giving them a slightly less than expected Pauli exclusion when confined. Does this imply a kind of “gravity-like” force? If the bosons exist in a Q[√-23] field or do the “a de Moivre number and p is a prime number. Unique factorizations of cyclotomic integers fail for p > 23.” provide dark energy like effect as all below 24 have more Pauli exclusion of state due to lack of degenerate factorization of a 23 particle “super-force”?>>

But 20, and an inverse of the Hz² (+2, −2) => (*Hz², /Hz²) @ e^x, for something like 23 being the prime larger than 20, itself an essential behaviour encompassing number, while 23 is also the prime less than 24, itself another essential behaviour encompassing number. Most exclusive field of 23 and a totient amongst many. So like the disjoint 23 feedback being maximal presents the most of its dark influence on dark, dark influence for zero black kinda dark.

15015 and 255255 on the Beyond

The peaks within and without crossing the R0 of gain into implementation in reality. Comprehensive ring gates and information transport and regenerative bits held fast by tallies of entropy. Rings within subsets in later fields may we walk into shining bright with the power of imaticity may we move toward imagionics and theory of technologies.

So the Hz² must have come from somewhere. Equality of something being equal to a constant over the angular energy. An intuit that something with higher angular energy is more E=mc² massive and has a greater boson intensity of flux. This multiplies with the bosonic cyclotomics to field-scale them. To keep within the small constant η, if it is not zero but oh so close to it (relatively tiny, and it could be Planck’s but this is not proven), the fermionic mass-independent factor has to shrink in scale by reducing velocities, accelerations and jerks, making it more certain in nature, maintaining the constancy of η. True enough, it could be a simplistic gamble on the nature of energy density, or it could just be more flexible in quadrature of complex phase lead and lag shift from zero while still being “fast and loose”.

Free42 Android App Longer Term

A very nice calculator app. I’ll continue to use it. What would I change? And would I change what I’d changed? A fork with extras began and is in development.

  • I’d have a SAVE and LOAD with load varieties (LOADY, LOADZ, LOADT for register and all stack registers higher if all 4 stack items are not to be restored, along with LASTX) depending on restoring the right stack pattern after a behaviour, which makes for first-class user-defined functions. SAVE? would return how many levels of saving there are.
  • Perhaps variables based on the current program location (or section). A better way of reducing clutter than a tree, while accessing the tree would need a new command specifying the variable context. This would lead to a minimal CONTEXT to set the LBL style recall context and use the THIS to set this context as per usual but without the variable in context clutter. A simple default to change the context when changing program space ensures consistency of being. In fact, nested subroutines could also provide a search order for an outer context. THAT could just remove one layer of the context, or more precisely change the current to the one below on the call stack such that THAT THAT would get the second nesting context if it exists. LSTO helps a little.
  • Some mechanics for the execution of a series term generator which, by virtue of a modified XEQG (execute generator), could provide some faster summation, or perhaps by flags a product, a sum, a term or continued fraction precision series acceleration.
  • Differential (numeric) and integral (endpoint numeric, multiple kinds, all with one implicit bound of zero for constant at zero) algorithms. I would not reimplement them 😀 as I would like a series representation by perhaps an auto-generated generator. So XEQG would have a few cousins.
  • Although Mathematica solving might not give %n inserts for parameterizing a solution for constants, this does not prevent XEQG doing a differential either side sampling at high order and reducing it geometrically for a series estimation of the exact value. In terms of integrals, an integral of x^n.f(x) where n goes to zero provides the first bit of insight into integrals as convergent sets of series, with an exclusion NonconvergentAreaComplex[] on Godelian (made to make a method of solve fail) differential equations (or parts thereof). Checking the convergents of the term supplied to XEQG and cousins allows for sensible errors and perhaps transforms to pre-operators on the term provider function. SeriesRanged[] (containing an action as a function) list for the other parts, with correct evaluation based on value; and how does this go multivariate? Although this looks out of place, it relates to series solutions of differential equations with more complex forms based on series of differentials. The integral of x.f(x)/x by parts as another giver of two more generators. The best bit is that the “integral” from such a form is just evaluated at one endpoint (maybe subtraction for definite integrals) and, as they include weighted series, can often be evaluated by the series acceleration of a small number of differentials of the function to be integrated. The differentials themselves can often be evaluated accurately as a series converging as the delta is geometrically reduced, with the improvements in the estimates being considered as new smaller terms in the series. So an integral evaluation might come down to (at 9 series terms per acceleration) about 2*90 function invocations instead of depending on Simpson’s rule, which has no series weighting to “accelerate” the summation.
Also, integration up to infinity might be a simpler process when the limits are separated into two endpoint integrals, as the summation over a limit to an estimation of convergence at infinity would not need as many conditional test cases on none, both and either one. As I think integrals should always return a function with parametric implicit constants, should not differentials return a parameterized function by default, booleaning the possibility of retrieving the faded constants? An offsetable self-recovery of diminished offset generic? SeriesRanged[Executive[]][ … ]
  • Perhaps an ACCESS command for building new generators (with a need to get a single generated) with a SETG (to set the generator evaluating ACCESS), and XEQG can become just a set of things to put in SETG “…”, making for easy generators of convergents and other structures. GETG for saving a small text string for nesting functions might be good but not essential, and might confuse things by indirection possibilities. Just having a fixed literal alpha string to a SETG is enough, as it could recall ACCESS operators on the menu like MVAR special programs (and not like INPUT programs). XEQG should still exist as there is the SETG combiner part (reducer) as well as the individual term generator (mapper) XEQG used for a variety of functions. This would make for easier operator definition (such as series functions by series accelerations, or convergent limit differentials by similar on the reduction of the delta) without indirect alpha register calling of iterates.
  • A feature to make global labels go into a single menu item (the first) if they are in the same program, which then expands to all in the current program when selected for code management.
  • +R for addition with residual: the fraction of X that was not added into the sum is returned in the X register and the sum returned in Y. This would further increase precision in some algorithms.
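The "differential by geometrically reduced delta, with the improvements treated as new series terms" item above can be sketched as Richardson extrapolation over central differences (the halving ratio and depth are arbitrary choices of mine):

```python
import math

def diff_series(f, x, h0=0.1, n=6):
    # central differences with a geometrically reduced delta
    ests, h = [], h0
    for _ in range(n):
        ests.append((f(x + h) - f(x - h)) / (2 * h))
        h /= 2
    # each Richardson round cancels the next even-power error term,
    # i.e. the improvements become new smaller terms in the series
    for k in range(1, n):
        ests = [(4**k * ests[i + 1] - ests[i]) / (4**k - 1)
                for i in range(len(ests) - 1)]
    return ests[0]
```

Twelve function invocations here already beat plain differencing by many digits, which is the economy the XEQG cousins are after.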

Rationale (after more thought and optimization)

  • Restoring the stack is good for not having to remember what was there and whether you need to store it. It requires a call stack frame connection, so maybe SAVE? is just call stack depth and so not required. (4 functions). LOAD and SAVE, with LOAD placing the old loaded X into LAST X, plus two commands before LOAD is called: USE to indicate a stack consumption effect after restore, and MAKE to leave one stack entry next lowest as an output.
  • Although local variables are good, in-context variables would be nice to see. Clutter from other contexts is avoided, or at least placed more keystrokes away from the main variables. This would also be easier to connect to the call stack frame. (3 functions): CONTXT, THIS and THAT. RCL tries CONTEXT before the call stack program associated variables. No code spams variables into other namespaces. STO stores into its associated variable space. This ensures an import strategy. The .END. namespace can be considered an initial global space, so the persistence of its content upon GOTO . . is useful, so XEQ “.END.” should always be available.
  • INTEG and SOLVE could be considered operators, but with special variables. Separation of the loop to reduce on from the map function makes more general summation functions possible given single-term functions. It would be more general to have 3 commands so that the reducer, the mapper and the variable to map could all be set, but is that level necessary? Especially since, in use, the common practice of setting the reducer and applying it to different maps seems more useful. But consistency and flexibility might have PGMRED, PGMMAP and MAPRED “var” for generality in one variable, with ACCESS in the reducer setting the right variable before executing the mapping. (4 functions).
  • Addition residual is a common precision technique. (1 function) +R.
  • I’d also make SOLVE and INTEG re-entrant (although not necessarily to a nested function call (a function already used in call stack frames, a stack check?)) by copying salient data on process entry, along with MAPRED, where the PGMRED-set function can be used again and so does not need a nested-reuse check.
  • As to improvements in SOLVE, it seems that detection of asymptotes and singularities confuses interval bisection. Maybe adding a small amount and subtracting a small amount moves actual roots but leaves singular poles alone, swamped by infinity. Also, the sum series of the product of the values and/or gradients may or may not converge as the pole or zero is approached.
  • Don’t SAVE registers or flags, as this is legacy stuff. Maybe a quadratic (mass centroid) regression, a Poisson distribution and maybe a few others, as the solver could work out inverses. Although there is the inconsistency of stack output versus variable output. Some way of auto-filling MVAR from the stack and returns for 8 (or maybe 6 (XYZT in and X subtracted out, and …)) “variables” on the SOLVS menu? Maybe inverses are better functionality, but the genericity of solvers is better for any evaluation. Allow MVAR ST X etc., with a phantom SAVE, and have MRTN for an expected output variable before the subtraction, making another “synthetic” MVAR or an exit point when not solving (and solving with an implicit – RTN, and definite integrals being a predefinition of a process before a split by a subtractive equation for solving)? It would, of course, need MVAR LAST X to maybe be impossible (a reasonable constraint of an error speed efficiency certainty). (5+1 menu size). Redefinition of many internal functions (via no MVAR and automatic solver pre- and postamble) would allow immediate inverse solves with no programming (SOLVE ST X, etc., with no special SOLVE RTN as it’s a plain evaluation). This makes MRTN the only added command, plus the extra ST modes on SOLVE, and also a way of function specification for inbuilt ones. The output to solve for can be programmatically set as the X register value when PGMSLV is executed and remembered when SOLVE is used next.
  • Register 24 is lonely. Perhaps it should contain weighted n, Σy, but no, that already exists. Σx²y seems better for the calculation of the weighted variance. That would leave registers 0 to 10 as fast scratch saves. The 42 nukes other registers in ALLΣ anyway, and I’d think not many programs use register 24 instead of a named variable. I’d be happy about only calculating it when in ALLΣ mode, as I never switch, and people who do usually want to keep register compatibility of routines for HP-41 code. Maybe PVAR for the n/(n-1) population variance transforms, although this is an easy function for the user to write. A good metric to measure what gets added. Except for +R, which is just looping and temporary variables for residual accumulation, with further things to add assuming the LAST Y would be available etc.
  • I’d even suggest a mode using all the registers 0 to 10 for extra statistical variables and a few of those reserved flags (flag 64). I think there is at least one situation (chemistry) where quadratic regression is a good high-precision idea. This makes REGS saving a good way of storing a stats set. Making the registers count down from the stats base in this mode seems a good idea. The following would provide quadratic regression with lin, log, exp and pow relation mapping on top of it, for a CFIT set of 8, along with the use of R24 above. An extra entry on the CFIT MODL menu with an indicator for that enablement toggle of the extra shaping and register usage (flag 64 set), with an automatic enable of ALLΣ. As the parabolic constant would not often be accessed, it would be enough to store it and the other ones after a fit, not interfering with live recalculation, so as to not error by assumption. It would, of course, change the registers CLΣ sets to zero. Flag 54 can perhaps store the quadratic fitting model mode. Quadratic regression details follow. Although providing enough information to manufacture a result for the weighted standard deviation, it becomes optimal to decide whether to add WSD or an XY interchange mode on a flag to get inverse quadratic regression, which would provide 12 regression curve options. The latter would need to extend the REGS array. FCSTQ might be better as a primary command to obtain the forecast root when the discriminant square root is subtracted negative, as two forecast roots would exist. The most positive one would likely be more real in many situations. Maybe the linear correlation coefficient says something about the root to use, and FCSTQ should use the other one?
    • R0 = correlation coefficient
    • R1 = quadratic/parabolic constant
    • R2 = linear constant
    • R3 = intercept constant
    • R4 = Σx³
    • R5 = Σx⁴
    • R6 = Σ(ln x)³
    • R7 = Σ(ln x)⁴
    • R8 = Σ(ln x)²y
    • R9 = Σx² ln y
    • R10 = Σ(ln x)² ln y
  • Flags still being about on the HP-28S was unexpected for me. I suppose it makes me not want to use them. The general user flags of the HP-41 have broken compatibility anyway as 11 to 18 are system flags on the HP-42S. There would be flags 67, 78, 79 and 80 for further system allocations.
  • I haven’t looked at whether the source for the execution engine has a literal-to-address resolver with an association struct field for speed, with indirection handled in a similar manner, maybe even down to address function pointer filling-in of checks and error routines, like in a virtual dispatch table.
  • If endpoint integrals provide wrong answers, then even the investigation into the patterns of deviation from the true grail summates to eventually make them right in time. A VirtualTimeOptimalIngelCover[] is a very abstract class for me today. Some people might say it’s only an analytical partial solution to the problem. DivergentCover[] as a subclass of IngelCover[], which itself is a list container class of the type IngelCover. Not quite a set, as removing an expansive intersection requires an addition of a DivergentCover[]. It’s also a thing about series summation order commutativity for a possible fourth endpoint operator.
  • MultiwayTimeOptimizer[ReducerExecutive[]][IngelCover[MapExecutive[]][]] and ListMapExecutiveToReturnType[] and the idea of method use object casting. And an Ingel of classes replaced the set of all classes.
  • I don’t use printing in that way. There’s an intermediate adapter called a PC tablet mix. The HP-41 was a system. A mini old mainframe. A convenience power efficiency method. My brother’s old CASIO with just P1 and P2 was my first access to a computational device. I’m not sure the reset kind of goto was Turing complete, given not enough memory for predicate register branch inlining.
  • ISO 7 layer to 8 layer, insert at level 4, a virtualized channel layer. Provides the data transform between transmit-optimal and compute-optimal. Is this the DataTransport layer? Ingel[AutomaticExecutive[]][].
    1. Paper
    2. (Media Codec)
    3. Symbols
    4. (Rate Codec)
    5. Envelope
    6. (Ring Codec) 3, 2 …
    7. Post Office
    8. (Drone codec)
    9. Letter Box
    10. (Pizza codec)
    11. Name
    12. (Index codec)
    13. Dear
  • Adding IOT as a toggle (flag 67) command in the PRINT menu is the closest place to IO on the Free42. Setting the print upload to a kind of object entity server. Scheduling compute racks with the interface problem of busy until state return. A command CFUN executes the cloud functions which have been “printed”. Cloud sync involves keeping the “printed” list and presenting it as an options menu in the style of CATALOG for all clouded things. NORM (auto-update publish (plus backup if accepted), merge remote (no global .END.)) and MAN (manual publish, no loading) set the sync mode of published things, while TRACE (manual publish, merge remote plus logging profile) takes debug logs on the server when CFUN is used but not for local runs. Merge works by namespace collision of local code priority, and no need to import remote callers of named function space. LIST sets a bookmark on the server.
  • An auto QPI mode for both x and y. In the DISP menu. Flag mode on in register 67. Could be handy. As could a complex statistics option when the REGS array is made complex. It would be interesting to see options for complex regression. As a neural node functor, a regression is suitable for propagation adaptation via Σ+ and Σ- as an experiment into regression fit minimization.
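The +R addition-with-residual proposed above corresponds to the standard TwoSum trick. A minimal sketch (the Y-gets-sum, X-gets-residual convention is from the bullets; the Python names are mine):

```python
# TwoSum: s is the rounded sum, r the exact part that did not fit.
# In +R terms: Y would get the sum, X the unadded fraction of X.
def add_r(x, y):
    s = x + y
    t = s - y                      # recovered x contribution
    r = (x - t) + (y - (s - t))    # exact rounding error of s
    return s, r

print(add_r(1.0, 1e-20))           # → (1.0, 1e-20)
```

Accumulating the residuals separately and folding them in at the end is the usual way this raises precision in summation loops.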
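The quadratic-regression registers listed above (Σx³, Σx⁴, Σx²y and friends) are exactly the sums needed for the 3x3 normal equations. A plain, unweighted sketch, with names of my own choosing and no pivoting (fine for well-scaled data):

```python
# Fit y = a*x^2 + b*x + c from accumulated sums
# (n, Σx, Σx², Σx³, Σx⁴, Σy, Σxy, Σx²y), as the register set suggests.
def quad_fit(xs, ys):
    n = float(len(xs))
    sx = sum(xs); sx2 = sum(x * x for x in xs)
    sx3 = sum(x**3 for x in xs); sx4 = sum(x**4 for x in xs)
    sy = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sx2y = sum(x * x * y for x, y in zip(xs, ys))
    # augmented normal equations, solved by Gaussian elimination
    A = [[sx4, sx3, sx2, sx2y],
         [sx3, sx2, sx, sxy],
         [sx2, sx, n, sy]]
    for i in range(3):
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        for j in range(3):
            if j != i:
                f = A[j][i]
                A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
    return A[0][3], A[1][3], A[2][3]   # a, b, c

print(quad_fit([0, 1, 2, 3], [1, 2, 7, 16]))   # recovers 2x² - x + 1
```

The lin/log/exp/pow variants above are the same solve with x or y replaced by their logarithms, which is what the Σ(ln x) registers provide.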

Minecraft Mod Development

Got distracted. It will work out fine as I move backwards and forwards between this and DSDev. Version 1.16 of Minecraft now has a Forge to make mods. I had an idea and started on a mod. There are a lot of changes since I last dipped a toe in the water. It looks as though it will make many things easier.

After a little detour into crafting recipes, simple potion time extensions and extra Redstone blocks, I could move onto more complex potions with new effects, or maybe even mobs. Mobs are however less likely than other gameplay elements.

  • Enchantments – nice but seems like a stats and number modification game.
  • Mobs – quite a lot already, but mob AI looks like an interesting thing to improve.
  • Crafting – best with new blocks with uses.
  • Brewing – similar to enchantments but can have player Effects, and there is much scope with Thick Potion or Mundane Potion expansion.
  • Non Block Items – could have uses (Food, Item Frames …) but would have to have utility and not just be another thing to be of interest. I’m adding a Written Book for example.
  • Block Items – there are already many, but I find adding to Redstone blocks an interesting one for mechanisation and automation. Maybe new technology is possible? I remember writing a teleport chest a long while ago and now the Ender Chest has some of the same functionality but is better.

Nice after getting used to things like @ObjectHolder and other new things. Still a few assumptions in the documentation, such as default loot tables for blocks.


DSi Homebrew

I decided to start some DSi homebrew as a little fun project. Just looking into it, it seems I can do a GL2D screen and a console with a keyboard quite easily. And then a little audio.

With a 128 kB texture, it looks possible to have about 512 (16 * 16) glyphs on the GL2D layer in 256 indexed colours. That should be good enough to start with. I suppose I’ll find out how to use multiple VRAM banks.
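A quick sanity check of that glyph budget (pure arithmetic, no libnds API assumed):

```python
# 512 glyphs of 16x16 texels at 1 byte each (256 indexed colours)
glyphs, side, bytes_per_texel = 512, 16, 1
total = glyphs * side * side * bytes_per_texel
print(total)                 # 131072 bytes, i.e. exactly 128 kB
```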

At the moment I’m stuck on this SD card not being recognized with various formatting. So I’ll have to get a nice new class 10 original one to check.

Open Code for an obvious game to come. Next to look at some auto animation and some 3D models for import.

So I’ve managed to work out some things and got the memory pit overflow exploit working. The “unlaunch” installer does not install, so it’s just keeping with the memory pit exploit whenever I need homebrew access.

There are still things to work out like why the MOD file does not run the next one, although this is more likely related to why the event loop only seemed to go through once. But that’s coding for another day.

So the generic menu is working. I’m still looking into why the switch back and forth between 2D and 3D on the main screen is resetting the image to magenta, as it seems such things just set one register. A foreground sound automatic manner, and hooks into a game class, seem to be logical next things to do.

This would make a game select something that could be placed in the options for maximizing utility of the 4MB limit. Finding a way of decompressing textures would also seem to free about 100kB, which is a lot in simple game designs.

To maximise sound utility, it might be possible to replace some of the sounds in the .mod files with ones to be used in the game, as this seems possible and saves memory. I also must find out how to further reduce the file size of the .wav files.

So it seems I have about 500 kB for game logic and data excluding sound and graphics. I have defined various classes GameLogic (for generation), CTL (for control of the main machine loop), Audio (for triggering audio), BG (for background control) and Font (for 2D font overlays on the main display). This nicely abstracts the machine of all the setup and configuration.

Been doing graphics for a game idea. 8 by 8 is very tiny but fun. I seem to have 11 rows of 32 tiles left. I’m thinking of how to utilize this for best effect. I’m very likely to use genetic algorithms to make the AI effective. I have had some good ideas to abstract this into the enemy design.

DCS ASCII Map?

I think I might do a Ham radio licence. I’ve been thinking about it for a few weeks. It might be fun. I’ve been thinking of experimenting with using DCS squelch codes for data transmission of character streams. It should be possible using the 83 codes available with easy mapping.

023 @   114 N   205 r+   306 lf   411 0   503 :   703 sp
025 A   115 O   223 r-   311 '    412 1   506 ;   712 !
026 B   116 P   226 g+   315 (    413 2   516 <   723 "
031 C   125 Q   243 g-   331 )    423 3   532 =   731 £
032 D   131 R   244 b+   343 +    431 4   546 >   732 $
043 E   132 S   245 b-   346 ,    432 5   565 ?   734 %
047 F   134 T   251 up   351 -    445 6           743 ^
051 G   143 U   261 dn   364 .    464 7           754 &
054 H   152 V   263 le   365 /    465 8
065 I   155 W   265 ri   371 \    466 9
071 J   156 X   271 dl
072 K   162 Y
073 L   165 Z
074 M   172 *
        174 #


This would be easy to integrate into a multipurpose app to connect on digital modes for a low-bandwidth 300 baud signal at 23 bits per character. This would be quite reliable as a means of doing a more modern RTTY. Just leaves ` _ | and ~ in base ASCII to do later, with 20 (11-9) codes “free”: the 2xx and the 6xx lines. This gives the printable 63, and the 20 control characters with no print, along with a special control for inclusion in printing (dl, for delete correction), making 83.

So the 2xx codes (non-destructive locators, except “delete”, the anti-time locator) are colour saturation and direction control with delete (with correction “time” dynamics, perhaps in a 6-bit code), and the 6xx codes are where more complex things happen. A basis repetition rate for distance starts and the coding uses this as a basis to transmit on. So a basis of 16 repetitions means each symbol is sent 16 times, for a 1/16 data rate. 612 uses 2^n repetitions based on a log of the number of rp after the symbol to be repeated: 2, 4, 8, 16 … after rp, rprp, rprprp … 662 returns to a maximum basis of repetitions and attempts to reduce it to keep the number of 627 messages down.
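The basis mechanism amounts to repetition coding with a majority vote at the receiver. A sketch under stated assumptions (the 612/662 renegotiation is left out; this only shows how a basis of n trades data rate for S/N):

```python
from collections import Counter

def tx(symbols, basis):
    # each DCS symbol is sent `basis` times
    return [s for s in symbols for _ in range(basis)]

def rx(stream, basis):
    # majority vote over each group of `basis` received symbols
    out = []
    for i in range(0, len(stream), basis):
        group = stream[i:i + basis]
        out.append(Counter(group).most_common(1)[0][0])
    return out

noisy = tx(["023", "114", "205"], 16)
noisy[3] = "???"                     # one corrupted repetition
print(rx(noisy, 16))                 # → ['023', '114', '205']
```

The averaging over repetitions is what raises S/N at the cost of rate, exactly as the 1/16 figure above suggests.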

The basis and the use of 612 might lead to a 662 if the decoder is not in synchronization with respect to the basis of repeats. The basis is ignored by the higher-level code and is just a summation over the noise to increase S/N by symbol repetition.

606 sy – synchronous idle 
612 rp – repetition of x[rp]x or x[rp]x[rp]xx (7)
624 ra – rep acknowledge all reps in RX in TX
627 re – rep acknowledge with err correct as 624
631 ri – rep basis increase request (2*)
632 rd – rep basis decrease request (2/)
654 ok – accept basis repetition count by request
662 un – unsync of repetition error reply (max)
664 cq – followed by callsign and sy termination

This allows for a variable data distance at a constant rate, especially if the RX samples with code expectation and averages over the number of symbol reps. It also synchronizes the start of many DCS codes, but would reduce the speed of lock by needing the code aligned.

Extended codes could be used to extend the coding to include other things. This is not necessary, and 83 symbols are enough. This is a good start, and extras are fine though. Even precise datarate coding lock would give better performance over DX at high repetition basis.

A modified form of base64 encoding, along with digital signatures (ElGamal?), could provide good binary 8-bit transmission, and good certainty of block reception. A return of the good signature, or the false signature on error, makes for a good block retransmit given a simplex window size of 1. In this case, synchronous idle would be a suitable preamble, and the 2xx and 6xx codes would be ignored as part of the base64-esque stream (except 606 for filling in empty places in the blocks of 5 in the base64 code).

COVID-19

Business has not really been affected, and so I am still available for work, and sitting in isolation thinking about the things I want to set a direction.

It’s a nasty infection. Take care.

9+4+1+1+3*4=27 and a 9th Gluon for 26 Not

It still comes to mind that the “Tits Group gluon” might be a real thing, as although there seem to be eight, the ninth one is in the symmetry of self-attraction, perhaps causing a shift in the physical inertia from a predicted instead of filled-in constant of nature.

There would appear to be only two types of self dual coloured gluons needed in the strong nuclear force. As though the cube roots of unity were entering into the complex analysis that is within the equations of the universe.

9+4+1+1+3*4=27

The 3*4 is the fermionic 12, while the relativistic observational deviation of the abstract conceptual observation frame versus the actual moving observation particle provides for the cyclotomic 9+4+1+1 = 15, one of which is not existential within itself but just kind of a sub-factor of one of the other essentials. It also points out a 3*5 that may also be somewhere.

Given the Tits gluon, the number of bosons would be 14; removing 8 for gluons leaves 6, removing 4 for the electroweak boson set would leave 2, and removing the Higgs would leave 1 boson left to discover for that amount of complexity in the bosonic cyclotomic groups.

The fantastic implications of the 26 group of particles and the underlying fundamentals which lead to strong complex rooted pairs, and leptonic pair set separation. Well, that’s another future.

Roll on the Plankon, as good a name as any. The extension of any GUT beyond it would either be some higher bosonic cyclotomy or a higher-order effect of fermions leading to deviation from Heisenberg uncertainty.

Up Charm Top
Down Strange Bottom
Electron Muon Tau
E Neutrino M Neutrino T Neutrino
H Photon W+
? Z0 W
Gluon Gluon Gluon
Gluon Tits Gluon Gluon
Gluon Gluon Gluon

Dimensions of Manifolds

The Lorentz manifold is 7 dimensional, with 3 space-like, 1 time-like and 3 velocity-like, while the other connected manifold is 2 space-like, 1 time-like, 2 velocity-like and a dimensionless “unitless” dimension. So the 6 dimensional “charge” manifold has properties of perhaps 2 toroids and 2 closed path lines in a topological product.

Metres to the 4th power per second: the rate of change of a 4D spatial object, perhaps. The Lorentz manifold has a similar metres to the 6th power per second squared measure of dimensional product. Or area per kilogram and area per kilogram squared respectively. This links well with the idea of an invariant gravitational constant for a dimensionless “force” measure, and a mass “charge” in the non-Lorentz manifold of root kilogram.

Root seconds per metre? Would this be the Uncertain Geometry secondary “quantum mass per length field” and the “relativistic invariant Newtonian mass per length field”? To put it another way, the constant G maps the kg squared per unit area into a force, but the dimensionless quantity (not in units of force) becomes a projector through the dimensionless-to-force map.

GF*GD = G, and only GF is responsible for mapping to units of force with relativistic corrections. GD maps to a dimensionless quantity and hence would be invariant. In the non-Lorentz manifold the GMM/r^2 equivalent would be in units of root kilogram (root seconds per metre), and GD would have different units too. Another option is for M to be quantized and of the form GM/r^2, as both the “charge” masses could be the same quantized quantity.

The reason the second way is more inconsistent is that the use of the product of field energies as the linear projection of force would give an M^2 over an r^2, and it would remove some logical mappings or symmetries. In terms of moment-of-inertia thinking, GMM/Mr^2 springs to mind, but has little form beyond being an extra idea to test the maths with.

W Baryogenesis Asymmetrical Charge

The split of W plus and minus into separate particle slots takes the idea that the charge mass asymmetry between electrons and protons can come from a tiny mass half-life asymmetry. Charge cancellation of antiparticle WW pairs may still hold, but momentum cancellation does not have to be exact, leading to a net dielectric momentum. Who knows an experiment to test this? A slight induced photon-to-Z imbalance on the charge gradient, with a neutrino emission. The cause of the W plus to minus mass ratio being a consequence of the sporadic group orders and some twist in very taut space versus some not-as-taut space, or a dimensionless expression of a symmetrically broken balance of exacts.

The observation of a dimensionless “unitless” dimension being invariant to spacetime and mass density dilation. My brain is doing a little parallel axis theorem on the side, and saying 3D conservation of energy is an emerging construction, with torsion being a dilative observable in taut spacetime.

Recent experiment of inertia of spin in neutrons provides a wave induction mechanism. Amplified remote observation of non EM radio maybe possible. Lenz’s law of counter EM cancellation may not apply. It is interesting. Mass aperture flux density per bit might be ok depending on S/N ratio. That reminds me of nV/root Hz. So root seconds is per root Hz, and nV or scaled Volts is Joules per mol charge, Z integer scale */ Joules, or just Joules or in Uncertain Geometry house units Hz. So Hz per root Hz, or just root Hz (per mol).

So root seconds per metre is per root Hz metre. As the “kilogram equivalent, but for a kind of hypercharge” in the non-Lorentz manifold, perhaps. The equivalent of GD (HD) projecting the invariant to an actual force. By moving the dilative into GF and HF, use can be made of invariant analysis. Mols per root Hz metre is also a possible QH in FHI = HDQHQH/R^2, the manifold disconnect being of a radius-calculated norm in nature. A “charge” in per noise energy metre?

Beyond the Particles to the 18n of Space with a Tits Connection

Why aye lad, it’s sure been a beginning. The 26 sporadic groups and the Tits group as a connection to the 18n infinite families of simple groups. What is the impedance of free space (Google), and does water become an increase or decrease on that number of ohms? Inside the nature of the speed of light at a refractory boundary, what shape is the bend of a deflection, and what ohm expectations are there on the impedance to the progress of light?

Boltzmann Statistics in the Conduction of Noise Energy as Dark Energy

Just as ohm metres is a resistivity of the medium, its inverse being a conductivity in the medium, a united quantity relating to “noise energy or intensity” with an extra metres is maybe an area-over-length transform of a bulk property of a thing. The idea that a “charge” can be a bulk noise conductivity makes for an interesting question or two. Is entanglement conducted? Can qubits be shielded? Can noise be removed from a volume of space?

If noise pushes on spacetime, is it dark energy? Is the Tits gluon connection an axion with extras, conducting into the spacetime field at a particular cycle size of the double cover of the singular group from the 18n families, which shall be known as the flux origin? 2F4(2)′: maybe the biggest communication opportunity this side of the supermassive black hole Sagittarius A*.

The Outer Manifold Multiverse Time Advantage Hypothesis

Assuming conductivity, and locations of the dimensionally reduced holographic manifold, plus relativistic time dilation, what are the possible ratios of the speed of light to entanglement conduction?

As noise from entanglement comes from everywhere, any noise directionality control implies focus and control of noisy amounts from differentially noise-shaped sources. Information is therefore not in the bit state, but in the error spectra of the bits.

The inner (or Lorentz) manifold is inside the horizon, and maybe the holographic principle is in error in that both manifolds project onto each other; what is inside a growing black hole remains inside, and when growth happens does the outer manifold completely get pushed further out?

A note on dimensionful invariants such as velocities: although they are invariant, they become susceptible to environmental density manipulation, whereas dimensionless invariants are truly invariant in that there is no metre or second that will ever alter the scalar value. For example, Planck’s constant is dimensionless in Uncertain Geometry house units.

So even though the decode may take a while due to the distance of the environmental entanglement and its influence on statistics (is it a radius or radius-squared effect?), the isolation of transmission via a vacuum could in principle be detected. Is there a relationship between distance and time of decode for relevance of data causality?

If the spectrum of the “noise” is detectable then it must have properties different from other environmental noise, such as being the answer to a non binary question and hopefully degenerative pressure eventually forces the projection of the counter solutions in the noise, allowing detection by statistical absence.

Of course you could see it as a way of the sender just knowing what had not been received, from basic entanglement ideas, and you might be right. The speed of temperature conduction is limited by the speed of light, and non “cool packed” atomic orbital occupancy in the bulk is controlled by photon exchange and not by the degenerative limits imposed by Pauli exclusion. A quantum qubit system not under vacuum or cooling does not produce the right answer, but does it statistically very slightly produce it more often? Is the calculation drive of the gating applying a calculative pressure to the system of qubits, such that other qubits under a different calculation pressure can either unite or compete for the honour of the results?

Quantum noise plus thermal noise equals noise? 1/f? Shot noise, for example, is due to carrier conduction in PN-junction semiconductors, in some instances. It could be considered a kind of particle observation by the current in the junction, which gets (or can be) amplified. I’m not sure if it is independent of temperature in a limited (non-plasma-like) range, but it is not thermal noise.

The Lambda Outer Manifold Energy in a More General Relativity

The (inner of the horizon) manifold described by GR has a cosmological constant option associated with it. This could be filled by the “gravitation of quantum noise conduction” symmetrical outer-manifold isomorphic field with a multiplicative force (dark energy?), such that the total, when viewed in an invariant force measure picture, is not complicated by the horizon singularities of the infinities from division by zero. Most notable is the Lorentz contraction of the outer manifold as it passes through the horizon on expansion or contraction of the radius.

The radius itself, not being invariant, cannot be cast to other observers to make sense; only calculated invariants (and I’d go as far as to say dimensionless invariants) have the required properties to be shared (or just agreed) between observers without questions of relativistic reshaping. Communication does not have to happen to agree on this knowledge of the entangled dimensionless measure.

CMB Focus History

With the CMB, assume a temperature bend due to density and distance from a pixel; a back step in time then becomes a new picture with its own fluctuations in density, and hence a bend to sum as an action on a pixel, for an earlier accumulation over pixels drifting to a bent velocity. Motion in the direction of heat moved further back in time. Does anything good show up? Does the moment weight of other things besides an inverse-square bend look a little different?

So as the transparency emission happened over a time interval, the mass should allow a kind of focus back until the opacity happens. Then that is not so much of a problem as it appears, or not, as it is a fractional absorption ratio, and the transparency balance passes or crosses through zero on an extrapolation of the expectation of continuation.

Then there may be further crossings back as the down-conversion of the redshift converts ultra gamma into the microwave band and lower. The fact that the IF stage of the CMB receiver has a frequency response curve, and that a redshift function may be defined by a function in variables, might make for an interesting application of an endpoint integral, as the swapping of a series in dx (Simpson’s rule) becomes a series in differentials of the function, but with an exponential kind of weighting better suited to series acceleration.

Looking back via a kind of differential calculus induction of function, right back, and back. The size of the observation aperture will greatly assist, as would effective interpolation in the size of the image, with some knowledge of general relativity and the 3D distance of the source of the CMB.

To the Manifold and Beyond

Always fun to end with a few jokes, so the one about messing with your experiment from here in multiple ways, and taking one way home and not telling you if I switched it off, seems a good one. There are likely more, but today has much thought in it, and there is quite a lot I can’t do. I can only suggest CERN keep the W+ and W- events in different buckets on the “half spin anti-matter opposite charge symmetry, full spin boson anti-matter same charge symmetry as could just be any”, and “I wonder if the aliens in the outer universe drew a god on the outside of the black hole just for giggles.”


Differential Modulation So Far

Consider the mapping x(t+1) = k.x(t).(1-x(t)), made famous in chaos mathematics. Given a suitable set of values of k, one for each of the symbols to be represented on the stream, preferably of a size which produces a chaotic sequence, the sequence can be map-stretched to encompass the transmission range of the signal swing.

Knowing that the initial state is represented with an exact precision, and that all calculations are performed using deterministic arithmetic with rounding, it becomes obvious that for a given transmit precision it is possible to recover some pre-reception transmission by inferring the preceding chaotic sequence.
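The inversion is concrete: each received value has at most two algebraic preimages under the logistic map, so the preceding sequence can be inferred by walking backward and pruning branches. A sketch:

```python
import math

def preimages(y, k):
    # Solve y = k * x * (1 - x) for x: the two candidate
    # predecessors of a received value under the logistic map.
    d = 1.0 - 4.0 * y / k
    if d < 0.0:
        return []
    r = math.sqrt(d)
    return [(1.0 - r) / 2.0, (1.0 + r) / 2.0]

# The true predecessor is always among the two candidates, so a
# receiver can step backward, keeping branches consistent with the
# known precision and discarding the rest.
k, x0 = 3.9, 0.3
x1 = k * x0 * (1.0 - x0)
candidates = preimages(x1, k)
```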

The maximum likelihood calculation would be involved and extensive to obtain a “lock”, but after lock the calculation overhead would go down and just assist in a form of error correction. In terms of noise immunity this would be a reasonable modulation, as the past estimation would become more accurate given reception time and greater knowledge of the sequence, its meaning, and its scope of sense in decode.

Time Series Prediction

Given any time series of historical data, predicting the future values in the sequence is a computational task whose complexity increases with the dimensionality of the data. For simple scalar data a predictive model based on differentials and expected continuation is perhaps the easiest. The order to which the series can be analysed depends quite a lot on numerical precision.

The computational complexity can be limited by using the local past to limit the size of the finite difference triangle, with the highest-order assumption being zero or a Monte Carlo spread Gaussian. Other predictions based on convolution and correlation could also be considered.
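The difference-triangle predictor with the highest-order assumption of zero can be sketched directly (the function names are mine):

```python
def difference_triangle(window):
    # Forward-difference triangle over a local window of samples.
    rows = [list(window)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return rows

def predict_next(window):
    # Assume the next highest-order difference repeats the last one
    # (for a full triangle this bottoms out at a single value), then
    # fold the triangle back up to extrapolate one step ahead.
    rows = difference_triangle(window)
    nxt = rows[-1][-1]
    for row in reversed(rows[:-1]):
        nxt += row[-1]
    return nxt

# A cubic sequence is predicted exactly from five samples, since its
# fourth difference really is zero.
p = predict_next([t**3 for t in range(5)])
```

The window length caps the order of the triangle, which is exactly the complexity limit described above.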

When using a local difference triangle, the outgoing sample making way for the new sample in the sliding window can be used for a simple calculation of the error introduced by “forgetting” that information. In theory this could be used to control the window size, or the Monte Carlo variance. It is a measure related to the Markov model of a memory process, with the integration of high differentials multiple times giving more predictive deviation from what will actually happen.

This is obvious when seen in this light. The time sequence has within it an origin in differential equations, albeit of extreme complexity. This is why spectral convolution correlation works well: expensive to compute, but it works well. Other methods have a lower compute requirement, and that is why I’m focusing on them these past few days.

A modified Gaussian density approach might be promising. Assume an amplitude categorization about a mean, so that the signal density (of the time series in a DSP sense) can approximate “expected” statistics when mapped from the Gaussian onto the historical amplitude density, given that the motions (differentials) have various rates of motion themselves in order for them to express a density.

The most probable direction holds until it becomes over-probable and changes the likely direction or rates again. Ideas form from noticing things. Integration, for example, has the naive accumulation of residual error from how floating point numbers are stored, and higher multiple integrals magnify this effect greatly. It would be better to construct an integral from the local data stream of a time series, and work out the required constant by the addition of a known integral at a fixed point.

Sacrificing integral precision for the non-accumulation of residual power error is a desirable trade-off in many time series problems. The inspiration for the integral estimator came from this understanding. The next step in DSP, from my creative perspective, is a Gaussian compander to normalize high-passed (or regression-subtracted normalized) data to match a variance- and mean-stabilized Gaussian amplitude.
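One way such a Gaussian compander could be realized is a rank-based quantile map onto the standard normal; a sketch assuming only the empirical amplitude density, using the standard library’s NormalDist:

```python
from statistics import NormalDist

def gaussian_compand(samples):
    # Rank-map the historical amplitude density onto a standard
    # Gaussian: each sample goes to the normal quantile of its
    # empirical CDF position (mid-rank offsets avoid +/- infinity).
    nd = NormalDist()  # mean 0, sigma 1
    n = len(samples)
    order = sorted(range(n), key=lambda i: samples[i])
    out = [0.0] * n
    for rank, i in enumerate(order):
        out[i] = nd.inv_cdf((rank + 0.5) / n)
    return out

# Heavily skewed amplitudes come out with Gaussian-shaped statistics
# while preserving their ordering.
companded = gaussian_compand([0.1, 0.2, 0.4, 0.8, 1.6, 3.2])
```

In a streaming setting the empirical density would be estimated over a sliding window rather than the whole record; this block only shows the static map.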

Integration as a continued sum of Gaussians would, via the central limit theorem, tend toward a narrower variance, but the offset error and same-sign square error (in double integrals, smaller but with no average cancellation) lead to things like energy amplification in numerical simulation of energy-conserving systems.

Today’s signal processing piece was sparseLaplace, for quickly finding, for some sigma and time, the integral going toward infinity. I wonder how the series of the integrals goes as a summation of increasing sections of the same time step, and how this can be accelerated as a series approximation to the Laplace integral.

The main issue is that it is calculated from the localized data, good and bad. The accuracy depends on the estimates of the differentials, and so on the number of localized terms. It is a more dimensional “filter”, as it has an extra set of variables for the centre and length of the window of samples as well as sigma. A few steps of time should be all that is required to get a series summation estimate. Even the error in the time step approximation to the integral has a pattern, and may be used to make the estimate more accurate.
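The summation of increasing sections at the same time step, with an acceleration on top, can be sketched as follows. Aitken’s delta-squared stands in for whatever acceleration would finally be used, and f(t) = 1 is chosen because its transform 1/s is known; both choices are mine for illustration:

```python
import math

def laplace_partial_sums(f, s, dt, sections):
    # Partial trapezoid integrals of exp(-s*t) * f(t) over growing
    # spans [0, m*dt], all sharing the same time step dt.
    def g(t):
        return math.exp(-s * t) * f(t)
    vals, total = [], 0.0
    for m in range(1, sections + 1):
        a, b = (m - 1) * dt, m * dt
        total += 0.5 * dt * (g(a) + g(b))
        vals.append(total)
    return vals

def aitken(seq):
    # Aitken delta-squared acceleration of a convergent sequence.
    out = []
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        denom = c - 2.0 * b + a
        out.append(c - (c - b) ** 2 / denom if denom else c)
    return out

# f(t) = 1 has the known transform 1/s; the exponential tail of the
# partial sums is geometric, which Aitken extrapolates in one shot.
s = 1.0
partials = laplace_partial_sums(lambda t: 1.0, s, 0.5, 12)
accelerated = aitken(partials)
```

Note the accelerated value converges to the discrete (trapezoid) limit, which sits close to, but not exactly at, 1/s; the residual is the time-step error pattern mentioned above.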

AI and HashMap Turing Machines

Considering that a remarkable abstract datatype or two is possible, and perhaps closely models the human sequential thought process, I wonder today what applications this will have when a suitable execution model, ISA and microarchitecture have been defined. The properties of controllable locality of storage and motion, along with read, write, branch on stimulus and other yet to be discovered machine operations, make for a container for a kind of universal Turing machine.
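A hashmap-taped Turing machine is small enough to sketch; the rule set here is a toy of my own (clear a run of 1s, then halt), not anything from the abstract datatype itself:

```python
def run_tm(rules, tape, state="start", head=0, max_steps=1000):
    # A Turing machine whose tape is a hashmap: position -> symbol.
    # Unwritten cells read as the blank symbol 0; storage is sparse,
    # and locality of motion is just arithmetic on the key.
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, 0)
        write, move, state = rules[(state, symbol)]  # branch on stimulus
        tape[head] = write
        head += move
    return tape, state

# Toy program: walk right, overwriting 1s with 0s, halt on blank.
rules = {
    ("start", 1): (0, 1, "start"),
    ("start", 0): (0, 1, "halt"),
}
tape, state = run_tm(rules, {0: 1, 1: 1})
```

The hashmap tape gives the controllable locality for free: any reachable key is storage, and motion operations are only ever relative moves of the head.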

Today is a good day for robot consciousness, although I wonder just how applicable the implementation model is for biological life all the universe over. Here’s a free paper on a condensed few months of abstract thought.

Computative Psychoanalysis

It’s not just about IT; thrashing through what the mind does, can be made to do, and did, all leverages information and the modeling of simulation growth for matched or greater ability.

Yes, it could all be made in neural nets, but given the tools available why would you choose to stick with the complexity and lack of density of such a solution? A reasoning accelerator would be cool for my PC. How is this going to come about without much worktop workshop? If it were just the oil market I could affect, and how did it come to pass that I was introduced to the fall of oil, and for what other consequential thought sets, and hence productions, I could change.

One might call it wonder and design dressed in “accidental” reckless endangerment. What should be a simple, obvious benefit to the world becomes embroiled in competition and the drive for profit, for the control of the “others” making of a non-happening which upsets vested interests.

Who’d have thought it from this little cul-de-sac of a planetary system. Not exactly galactic mainline. And the winner is not halting for a live mind.