Looks like the Net needs a reflective NAK uptake NAK shutdown.

And so some arbitrary bollocks began. Take self query and accept delay of self accessing you to provide NAK nothing or shutdown as your protocol wants a DoS attack. End, happy music continues. If bought net percent, then for purpose?

Seems 17/9/2021 has a dislike of digital flux to elsewhere. It’s surprising how a 10% NAK request packet NAK happened before could force upon the uptake of the promotion of the internet versus those which refuse cache and just want a DoS attack without in a sense showing provido.

Why do cells recycle approximately 10% of themselves toward an immune system which kills the not-of-body? Why does the engineering of telecoms have a standing 10% use of bandwidth to achieve effective 90% use goals?

It’s not my number, just the IP/ISP stack. It could just be the sound of Musk and the super-titty highway, but not likely?

Templigeadicalogical Algebra

Well, where shall I start? All could see the sums were good, of those taught sums a protected right of the conversationally on. A heavy reign introduction to subtraction and hence divisional and rights of subdivision consensus elective protection from decimation.

Command hierarchies of the on for example a battle unit knowing it is one that is on, set to fight for subfeariors of command wants subjunct summative transmissive networks of optimization feedback of induction of sums?

The out wave of a famine “un” the handy past the time of fight show lean on the order of meditative? As the collective induced cook as woman work to collect the connection of hand fights to womb growth multipliers.

This might be fun at the food policy unit 😀

Fight coming up from blame of “monk” to obviate doubt on family protection, and thrust occasional mindsets to perimeters of risk reduction from onslaught foresightful of the time to tummy from mummy. As the di summand moved predecimand, the focus on god cycle before analytic deconstruction by summandment of temple duty became moot, as the knowledge collapse in chaotic cycles was not brought into feedback Bode stability as the control hierarchy became argumentative dominant.

Bun fight!!!! Let them eat cake?? I’m ‘avin a go at integrands, might PID control feedback stuff if the boding command hierarchies summand with better cross information flow? Summands like ? The first L of simon. Technically though it is a second order tension of l fighty feisty hg an onset of Bode instability so yicked up set delguage from a fWell, via a dissociable epigenetic panic.

Prove me hyper politely wrong on the abuse that extends from the fear of critique. I’m on, some of the nicest people I know are women. There on, but maybe not on on in the wordfield. The uncertainty potential of action in fold downs of understanding? I can say being a man I understand testosterone. The idiomatic fork as well extends from this therefore the competition between communicative and full active fully automatic via lack of information has its inductive effect.

Anti June could swear on summand a sailor! Error analysis in. Idiomatic jokes are always a shitter. Control yourselves bint dat ladies. First orders, second orders, power orders, summands and seas so ship? Lndend? Trade …

Accents for the poor? Vaccination programs? The enlightenment of the orbifold tonces. The dictum freeze from Oxford, an experiment in analytical management by saturnalian net. A distribution of multi-lingo automation? Distribute, estimate, summand, perform error control minimization. Eliminate uncertainty of position.

Free42 Android App Longer Term

A very nice calculator app. I’ll continue to use it. What would I change? And would I change what I’d changed? A fork with extras began and is in development.

  • I’d have a SAVE and LOAD with load varieties (LOADY, LOADZ, LOADT, for that register and all stack registers above it, when not all 4 stack items are to be restored along with LASTX), depending on restoring the right stack pattern after a behaviour, which makes for first-class user-defined functions. SAVE? would return how many levels of saving there are.
  • Perhaps variables based on the current program location (or section). A better way of reducing clutter than a tree, while accessing the tree would need a new command specifying the variable context. This would lead to a minimal CONTEXT to set the LBL-style recall context, and THIS to set the current context as per usual but without the in-context variable clutter. A simple default of changing the context when changing program space ensures consistency of being. In fact, nested subroutines could also provide a search order for an outer context. THAT could just remove one layer of the context, or more precisely change the current context to the one below on the call stack, such that THAT THAT would get the second nesting context if it exists. LSTO helps a little.
  • Some mechanics for the execution of a series term generator which, by virtue of a modified XEQG (execute generator), could provide some faster summation, or perhaps, by flags, a product, a sum, a term or a continued-fraction precision series acceleration.
  • Differential (numeric) and integral (endpoint numeric, of multiple kinds, and all with one implicit bound of zero for the constant at zero) algorithms that I would not reimplement 😀, as I would like a series representation, perhaps via an auto-generated generator. So XEQG would have a few cousins.
  • Although Mathematica solving might not give %n inserts for parameterizing a solution for constants, this does not prevent XEQG doing a differential with either-side sampling at high order and reducing it geometrically for a series estimation of the exact value (see the first sketch after this list). In terms of integrals, an integral of x^n.f(x) where n goes to zero provides the first bit of insight into integrals as convergent sets of series, with an exclusion NonconvergentAreaComplex[] on Godelian (made to make a method of solve fail) differential equations (or parts thereof). Checking the convergents of the term supplied to XEQG and cousins allows for sensible errors, and perhaps transforms to pre-operators on the term provider function. SeriesRanged[] (containing an action as a function) as a list for the other parts, with correct evaluation based on value; and how does this go multivariate? Although this looks out of place, it relates to series solutions of differential equations with more complex forms based on series of differentials. The integral of x.f(x)/x by parts is another giver of two more generators. The best bit is that the “integral” from such a form is just evaluated at one endpoint (maybe a subtraction for definite integrals), and as they include weighted series they can often be evaluated by the series acceleration of a small number of differentials of the function to be integrated. The differentials themselves can often be evaluated accurately as a series converging as the delta is geometrically reduced, with the improvements in the estimates being considered as new smaller terms in the series. So an integral evaluation might come down to (at 9 series terms per acceleration) about 2*90 function invocations, instead of depending on Simpson’s rule, which has no series weighting to “accelerate” the summation. Also, integration up to infinity might be a simpler process when the limits are separated into two endpoint integrals, as the summation over a limit to an estimation of convergence at infinity would not need as many conditional test cases on none, both and either one. As I think integrals should always return a function with parametric implicit constants, should not differentials return a parameterized function by default, with a boolean for the possibility of retrieving the faded constants? An offsetable self-recovery of diminished offset generic? SeriesRanged[Executive[]][ … ]
  • Perhaps an ACCESS command for building new generators (with a need to get a single generated term), with a SETG (to set the generator evaluating ACCESS), and XEQG can become just a set of things to put in SETG “…”, making for easy generators of convergents and other structures. GETG for saving a small text string for nesting functions might be good but not essential, and might confuse things by indirection possibilities. Just having a fixed literal alpha string to a SETG is enough, as it could recall ACCESS operators on the menu like MVAR special programs (and not like INPUT programs). XEQG should still exist, as there is the SETG combiner part (reducer) as well as the individual term generator (mapper) XEQG used for a variety of functions. This would make for easier operator definition (such as series functions by series accelerations, or convergent limit differentials by similar on the reduction of the delta) without indirect alpha register calling of iterates.
  • A feature to make global labels go into a single menu item (the first) if they are in the same program, which then expands to all in the current program when selected for code management.
  • +R for addition with residual: the fraction of X that was not added into Y is returned in the X register, and the sum is returned in Y (see the second sketch after this list). This would further increase precision in some algorithms.
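
The “differentials evaluated as a series converging as the delta is geometrically reduced” idea above is essentially Richardson extrapolation. A minimal Python sketch (illustration only, not calculator code; the function, step size and ratio are placeholders of my choosing):

```python
import math

def derivative_series(f, x, h0=0.1, ratio=0.5, terms=6):
    """Estimate f'(x): central differences at geometrically shrinking steps,
    then treat each improvement as a new, smaller series term (Richardson
    extrapolation over the step size, one reading of the scheme above)."""
    estimates, h = [], h0
    for _ in range(terms):
        estimates.append((f(x + h) - f(x - h)) / (2 * h))
        h *= ratio
    k = 1
    while len(estimates) > 1:
        factor = ratio ** (2 * k)          # the h^(2k) error term being cancelled
        estimates = [(b - factor * a) / (1 - factor)
                     for a, b in zip(estimates, estimates[1:])]
        k += 1
    return estimates[0]

print(derivative_series(math.sin, 1.0))    # ≈ cos(1) = 0.5403023058681398
# Uses 2*terms = 12 function invocations in total.
```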
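
And the proposed +R is the classic error-free transformation (two-sum) trick. Again a hedged Python illustration rather than anything Free42 actually defines:

```python
def two_sum(x, y):
    """Knuth's error-free transformation: s = fl(x + y) and r is the exact
    rounding error, so s + r == x + y exactly (no magnitude ordering needed).
    r is what the proposed +R would leave in the X register."""
    s = x + y
    yy = s - x
    r = (x - (s - yy)) + (y - yy)
    return s, r

def compensated_sum(terms):
    """Accumulate a series while feeding the residuals back in."""
    total, residual = 0.0, 0.0
    for t in terms:
        total, r = two_sum(total, t)
        residual += r
    return total + residual

print(compensated_sum([1e16, 1.0, -1e16]))   # 1.0
print(sum([1e16, 1.0, -1e16]))               # 0.0 with naive summation
```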

Rationale (after more thought and optimization)

  • Restoring the stack is good for not having to remember what was there and whether you need to store it. It requires a call stack frame connection, so maybe SAVE? is just the call stack depth and so not required. (4 functions.) LOAD and SAVE, with the old loaded X placed into LAST X, plus two commands used before LOAD is called: USE, to indicate a stack consumption effect after restore, and MAKE, to leave one stack entry next lowest as an output.
  • Although local variables are good, in-context variables would be nice to see. Clutter from other contexts is avoided, or at least placed more keystrokes away from the main variables. This would also be easier to connect to the call stack frame. (3 functions.) CONTEXT, THIS and THAT. RCL tries the CONTEXT before the call stack program associated variables. No code spams variables into other namespaces. STO stores into its associated variable space. This ensures an import strategy. The .END. namespace can be considered an initial global space, so the persistence of its content upon GTO . . is useful, and XEQ “.END.” should always be available.
  • INTEG and SOLVE could be considered operators, but with special variables. Separation of the loop to reduce on from the map function makes more general summation functions possible, given single-term functions. It would be more general to have 3 commands, so that the reducer, the mapper and the variable to map could all be set, but is that level necessary? Especially since, in use, a common practice of setting the reducer and applying it to different maps seems more useful. But consistency and flexibility might have PGMRED, PGMMAP and MAPRED “var” for generality in one variable, with ACCESS in the reducer setting the right variable before executing the mapping. (4 functions.)
  • Addition residual is a common precision technique. (1 function) +R.
  • I’d also make SOLVE and INTEG re-entrant (although not necessarily for a nested call of a function already used in the call stack frames, a stack check?) by copying salient data on process entry, along with MAPRED, where the PGMRED-set function can be used again and so does not need a nested-reuse check.
  • As to improvements in SOLVE, it seems that detection of asymptotes and singularities confuses interval bisection. Maybe adding a small amount and subtracting a small amount moves actual roots but leaves singular poles alone, swamped by infinity. Also, the sum series of the product of the values and/or gradients may or may not converge as the pole or zero is approached.
  • Don’t SAVE registers or flags, as this is legacy stuff. Maybe a quadratic (mass centroid) regression, a Poisson distribution and maybe a few others, as the solver could work out inverses. Although there is the inconsistency of stack output versus variable output. Some way of auto-filling MVAR from the stack and returns, for 8 (or maybe 6: XYZT in, an X subtracted out, and …) “variables” on the SOLVS menu? Maybe inverses are better functionality, but the genericity of solvers is better for any evaluation. Allow MVAR ST X etc., with a phantom SAVE, and have MRTN for an expected output variable before the subtraction, making another “synthetic” MVAR or an exit point when not solving (with solving using an implicit − RTN, and definite integrals being a predefinition of a process before a split by a subtractive equation for solving)? It would, of course, need MVAR LAST X to maybe be impossible (a reasonable constraint of an error speed efficiency certainty). (5+1 menu size.) Redefinition of many internal functions (via no MVAR and automatic solver pre- and postamble) would allow immediate inverse solves with no programming (SOLVE ST X, etc., with no special SOLVE RTN as it’s a plain evaluation). This makes MRTN the only added command, plus the extra ST modes on SOLVE, and also a way of function specification for inbuilt ones. The output to solve for can be programmatically set as the X register value when PGMSLV is executed, and remembered when SOLVE is used next.
  • Register 24 is lonely. Perhaps it should contain weighted n, Σy, but no, that already exists. Σx²y seems better, for the calculation of the weighted variance. That would leave registers 0 to 10 as fast scratch saves. The 42 nukes other registers in ALLΣ anyway, and I’d think not many programs use register 24 instead of a named variable. I’d be happy about only calculating it when in ALLΣ mode, as I never switch, and people who do usually want to keep register compatibility of routines for HP-41 code. Maybe PVAR for the n/(n−1) population variance transforms, although this is an easy function for the user to write. A good metric to measure what gets added. Except for +R, which is just looping and temporary variables for residual accumulation, with further things to add assuming the LAST Y would be available etc.
  • I’d even suggest a QΣ mode using all the registers 0 to 10 for extra statistical variables, and a few of those reserved flags (flag 64). I think there is at least one situation (chemistry) where quadratic regression is a good high-precision idea. This makes REGS saving a good way of storing a stats set. Making the registers count down from the stats base in this mode seems a good idea. The following would provide quadratic regression, with lin, log, exp and pow relation mapping on top of it, for a CFIT set of 8, along with the use of R24 above (see the fitting sketch after this list). An extra entry on the CFIT MODL menu with indicator QΣ would toggle the extra shaping and register usage (flag 64 set), with an automatic enable of ALLΣ. As the parabolic constant would not often be accessed, it would be enough to store it and the other ones after a fit, not interfering with live recalculation, so as not to error by assumption. It would, of course, change the registers CLΣ sets to zero. Flag 54 can perhaps store the quadratic fitting model in QΣ mode. Quadratic Regression details. Although providing enough information to manufacture a result for the weighted standard deviation, it becomes optimal to decide whether to add WSD, or an XY interchange mode on a flag to get inverse quadratic regression, which would provide 12 regression curve options. The latter would need to extend the REGS array. FCSTQ might be better as a primary command to obtain the forecast root where the square root of the discriminant is subtracted, as two forecast roots would exist. The most positive one would likely be more real in many situations. Maybe the linear correlation coefficient says something about the root to use, and FCSTQ should use the other one?
    • R0 = correlation coefficient
    • R1 = quadratic/parabolic constant
    • R2 = linear constant
    • R3 = intercept constant
    • R4 = Σx³
    • R5 = Σx⁴
    • R6 = Σ(ln x)³
    • R7 = Σ(ln x)⁴
    • R8 = Σ(ln x)²y
    • R9 = Σx²ln y
    • R10 = Σ(ln x)²ln y
  • Flags still being about on the HP-28S was unexpected for me. I suppose it makes me not want to use them. The general user flags of the HP-41 have broken compatibility anyway, as 11 to 18 are system flags on the HP-42S. There would be flags 67, 78, 79 and 80 for further system allocations.
  • I haven’t looked at whether the source for the execution engine has a literal-to-address resolver with an association struct field for speed, with indirection handled in a similar manner, maybe even down to address function pointer filling-in of checks and error routines, like in a virtual dispatch table.
  • If endpoint integrals provide wrong answers, then even the investigations into the patterns of deviation from the true grail summate to eventually make them right in time. A VirtualTimeOptimalIngelCover[] is a very abstract class for me today. Some people might say it’s only an analytical partial solution to the problem. DivergentCover[] as a subclass of IngelCover[], which itself is a list container class of the type IngelCover. Not quite a set, as removing an expansive intersection requires an addition of a DivergentCover[]. It’s also a thing about series summation order commutativity for a possible fourth endpoint operator.
  • MultiwayTimeOptimizer[ReducerExecutive[]][IngelCover[MapExecutive[]][]] and ListMapExecutiveToReturnType[] and the idea of method use object casting. And an Ingel of classes replaced the set of all classes.
  • I don’t use printing in that way. There’s an intermediate adapter called a PC tablet mix. The HP-41 was a system. A mini old mainframe. A convenience power efficiency method. My brother’s old CASIO with just P1 and P2 was my first access to a computational device. I’m not sure the reset kind of goto was Turing complete, in some not-enough-memory-for-predicate-register-branch-inlining sense.
  • ISO 7 Layer to 8 Layer, insert at level 4, virtualized channel layer. Provides data transform between transmit optimally and compute optimally. Is this the DataTransport layer? Ingel[AutomaticExecutive[]][].
    1. Paper
    2. (Media Codec)
    3. Symbols
    4. (Rate Codec)
    5. Envelope
    6. (Ring Codec) 3, 2 …
    7. Post Office
    8. (Drone codec)
    9. Letter Box
    10. (Pizza codec)
    11. Name
    12. (Index codec)
    13. Dear
  • Adding IOT as a toggle (flag 67) command in the PRINT menu is the closest place to IO on the Free42. Setting the print upload to a kind of object entity server. Scheduling compute racks with the interface problem of busy until state return. A command CFUN executes the cloud functions which have been “printed”. Cloud sync involves keeping the “printed” list and presenting it as an options menu in the style of CATALOG for all clouded things. NORM (auto-update publish (plus backup if accepted), merge remote (no global .END.)) and MAN (manual publish, no loading) set the sync mode of published things, while TRACE (manual publish, merge remote plus logging profile) takes debug logs on the server when CFUN is used but not for local runs. Merge works by namespace collision of local code priority, and no need to import remote callers of named function space. LIST sets a bookmark on the server.
  • An auto QPI mode for both x and y. In the DISP menu. Flag mode on in register 67. Could be handy. As could a complex statistics option when the REGS array is made complex. It would be interesting to see options for complex regression. As a neural node functor, a regression is suitable for propagation adaptation via Σ+ and Σ- as an experiment into regression fit minimization.
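
For the QΣ item above, this is roughly the arithmetic the extra registers would support: an ordinary least-squares quadratic fit recovered purely from running sums. A Python sketch, with numpy used only for the 3×3 solve; the sample data and the register labels in comments are illustrative, not from the post:

```python
import numpy as np

def quadratic_fit(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c from running sums only,
    i.e. the quantities a QSigma mode would accumulate."""
    n    = len(xs)
    sx   = sum(xs)
    sx2  = sum(x**2 for x in xs)
    sx3  = sum(x**3 for x in xs)            # the proposed R4
    sx4  = sum(x**4 for x in xs)            # the proposed R5
    sy   = sum(ys)
    sxy  = sum(x*y for x, y in zip(xs, ys))
    sx2y = sum(x*x*y for x, y in zip(xs, ys))
    # Normal equations for the basis [x^2, x, 1].
    A = np.array([[sx4, sx3, sx2],
                  [sx3, sx2, sx],
                  [sx2, sx,  n]], dtype=float)
    rhs = np.array([sx2y, sxy, sy], dtype=float)
    a, b, c = np.linalg.solve(A, rhs)
    return a, b, c    # parabolic, linear and intercept constants (R1..R3 above)

xs = [0, 1, 2, 3, 4]
ys = [1.0, 2.1, 5.2, 10.1, 16.9]            # roughly y = x^2 + 1
print(quadratic_fit(xs, ys))                # ≈ (1.0, 0.0, 1.0)
```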

Snoooozzzzzzzze

Sounds like a plan. Seems some playing with Caustic 3 android app is also needed. The tiredness which is not tiredness, more of a motor function initialisation deficit. Maybe I’ll get inspired on how to do things today. Maybe some beer. The knot keeps of South American Indians and amino acid chains? I think today has a parallels feel to it.

I suppose a visit to the shops would also generate things to go with this chicken, which might be nice later. Does it have electrolytes? Maybe some more biochemistry videos, and the fascinating origins of how tryptophan came to be coded by a stop codon, and speculation on the recruitment of an extra essential amino acid.

ES-64 Architecture (Open Hardware)

I’ve been looking into a native 64-bit architecture design of late, ES-64, for a future where 64 bits is the default. Booting into 32 and 16 is history, but frequently done. Trying to flip some design ideas on their head is one thing, but a central build repository is so easy these days. Easier than the VHDL. The aim is eventual code, but at the moment it’s a spreadsheet in PDF format, and an allocation space. Enjoy, if you want to sell your hairdryer to the zero-share landfill or paid recycling dedopter point.

The initial instruction set looks good for general coding, and I decided at the outset to make a large number of opcodes no-operations, giving a certain way to expand to 32- and 48-bit opcodes. It’s inspired by the 68k but has a more RISCy feel. Most addressing modes were sacrificed to allow general operations on Word, Long and Quad as well as Float and Double. Bytes were not considered much, apart from some Unicode helper instructions. The machine is word addressed.

A large part of the opcode space was opened up by sensible ideas about the stability of certain operations on the PC. So a 20-register machine results, with a lot of free opcode space and a lot of reserved prefixes for things like vectors. A software model for simulation is likely before any VHDL.

The main focus on code density, to open up data cache bandwidth, means some aspects of RISC have to be ignored. A memory-to-memory model is used instead of a load-store model. This can be denser for things like one-off data loads, as the load and the indirection are done in the same instruction without the extra bits in code. Quick literals are limited to 5 bits and come with a built-in operation. This reduces register requirements, and with general width operations the 64-bit registers can easily split into two 32-bit register halves or four 16-bit register quarters (see the sketch below). Most code will fit well, perhaps as 16-bit threaded code with a few virtual memory pages multi-mapped for common subroutines, and a springboard for 32- and 64-bit subroutines.
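
As a purely illustrative sketch of the width handling just described (not the actual ES-64 encoding, which so far exists only as the PDF spreadsheet), here is how a 64-bit register value can be treated as halves or quarters, with a 5-bit quick literal folded into the operation:

```python
MASKS = {64: (1 << 64) - 1, 32: (1 << 32) - 1, 16: (1 << 16) - 1}

def read_lane(reg, width, lane):
    """Read one lane (a 32-bit half or 16-bit quarter) of a 64-bit register."""
    return (reg >> (width * lane)) & MASKS[width]

def write_lane(reg, width, lane, value):
    """Write one lane back, leaving the other lanes untouched."""
    shift = width * lane
    cleared = reg & ~(MASKS[width] << shift) & MASKS[64]
    return cleared | ((value & MASKS[width]) << shift)

def add_quick(reg, width, lane, literal5):
    """A 'quick' operation: a 5-bit literal (0..31) with the add built in,
    so no separate constant load is needed; wraps at the lane width."""
    assert 0 <= literal5 < 32
    result = (read_lane(reg, width, lane) + literal5) & MASKS[width]
    return write_lane(reg, width, lane, result)

r0 = 0x0123_4567_89AB_CDEF
print(hex(add_quick(r0, 16, 0, 0x11)))   # only the low 16-bit quarter changes
```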

The code generator might be more complex, with bucket 64k assignment and routine factorization, but that is a task a machine can do well. There are reasonably efficient methods of code factoring to reduce binary size, along the lines of the sketch below.
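
One naive form of that factoring, sketched here only to make the idea concrete (it is not any particular published algorithm): find the repeated instruction window whose extraction into a subroutine saves the most words.

```python
def count_non_overlapping(instructions, window):
    """Count non-overlapping occurrences of a window of instructions."""
    n, i, count = len(window), 0, 0
    while i + n <= len(instructions):
        if tuple(instructions[i:i + n]) == window:
            count += 1
            i += n
        else:
            i += 1
    return count

def best_factor(instructions, min_len=2, max_len=8, call_cost=1, ret_cost=1):
    """Find the repeated window whose extraction into a subroutine saves the
    most words: occurrences*(len - call_cost) - (len + ret_cost)."""
    best, best_saving = None, 0
    for n in range(min_len, max_len + 1):
        candidates = {tuple(instructions[i:i + n])
                      for i in range(len(instructions) - n + 1)}
        for window in candidates:
            occ = count_non_overlapping(instructions, window)
            if occ < 2:
                continue
            saving = occ * (n - call_cost) - (n + ret_cost)
            if saving > best_saving:
                best, best_saving = window, saving
    return best, best_saving

code = ["ld a", "add b", "st c"] * 3 + ["ld d"]
print(best_factor(code))   # (('ld a', 'add b', 'st c'), 2)
```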

Statistics and Damn Lies

I was wondering about a statistics problem I call the ABC problem. Say you have 3 walls in a circular path, of different heights, and between them are points marked A, B and C. In any ‘turn’ the ‘climber’ attempts to scale the wall ahead in the current clockwise or anti-clockwise direction. The chance of failing to get over a wall is proportional to the wall height, and on a failure the climber reverses direction. A simple thing, but what is the chance that the climber will be found facing clockwise just before attempting (successfully or not) a wall? Is it close to 0.5, as the problem is not symmetric?

More interestingly, the climber will in a very real sense be captured more often in the cell with the highest pair of walls. If the cell with the lowest pair of walls is just considered as consumption of time, then what is the ratio of the containment time over the total time not spent in that most escapable cell? A simulation sketch follows.
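
A Monte Carlo sketch of my reading of the problem, with the wall heights already scaled into failure probabilities (any proportional scaling would do) and a direction reversal on failure; the numbers are placeholders:

```python
import random

def simulate_abc(fail_prob=(0.2, 0.5, 0.8), turns=1_000_000, seed=1):
    """Three walls around a ring with cells between them; each turn the
    climber tries the wall ahead, fails with probability proportional to
    its height, and reverses direction on a failure."""
    rng = random.Random(seed)
    cell, clockwise = 0, True     # cell i sits between wall (i-1)%3 and wall i
    facing_cw = 0
    occupancy = [0, 0, 0]
    for _ in range(turns):
        occupancy[cell] += 1
        facing_cw += clockwise
        wall = cell if clockwise else (cell - 1) % 3   # the wall ahead
        if rng.random() < fail_prob[wall]:
            clockwise = not clockwise                  # bounced back
        else:
            cell = (cell + 1) % 3 if clockwise else (cell - 1) % 3
    return facing_cw / turns, [o / turns for o in occupancy]

p_cw, occ = simulate_abc()
print("P(facing clockwise just before a try):", round(p_cw, 4))
print("fraction of turns spent in each cell: ", [round(o, 4) for o in occ])
```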

So the binomial distribution of the elimination of the ‘emptiest’, when repeating this pattern as an array with co-prime ‘dice’ (if all occupancy has to be in either of the most secure cells in each ‘ring nick’): the rate depends on the number of ring nicks. The considered security majority state is the state (selected from the two most secure cell states) which more of the ring nicks are in, given none are in the least secure state of the three states.

For the ring nick array to be in the majority most secure state more than two-thirds of the time is another binomial or two away. If the most secure state is in the majority more than two-thirds of the time (excluding gaping minimal occupancy cells), and the middle-security cells are in the majority less than two-thirds of the time (by unitary summation), there exists a Jaxon Modulation coding to place data on the prisoners by reversing all their directions at once where necessary, to invert the majority into a minority, rarer state with more Shannon information. Note that the pseudo-random dice and other quantifying information remain constant in bits.

Dedicated to Kurt Godel … I am number 6. 😀

Windows Being Shit Again

Ever needed to move “Program Files” to make some space with an easy 32 GB SD? Obviously, Microsoft keeps getting a backhand bribe for filling up internal drives. It turns out a nice utility called Steam Mover does the job of making a shortcut quite nicely. Windows 10 should have allowed this, and so should Windows 7. This is especially bad form when the Windows 10 upgrade installer will not use the SD for gaining the space to do an install. Where did all the GB go? On Visual Studio 2017. A bloaty C compiler.

Even LibreOffice 5 is deleting and doing failed installs. So much for being free. Part of the NSA always-on spyware forced upon people’s electric processing bills. Another few GB of keywords to search through, all parasitized off you, for the nationally secure, and stuff you.

So you check out a free demo rental of your supposed outright buy, and then you’ll have to change to another, as the megabytes increase to do much the same but slower, on faster hardware.

Yep, move a set of bits to some other volume, and all hell breaks loose. It does make you wonder why a specific Office 365 piece of code is running with a file lock when no office documents are open or used.

The 3D Flavour Tensor in Analogue to the 4D of Einstein, for a 3D, 4D Curvature in Particle Physics

I like to keep updated about particle physics and LHC things, to quite an advanced level. My interest is in fields and their previous engineering value in radio waves and electronics in general. It makes sense to move to a tensor algebra in the 2+1 charge space, just as was done for the theory of gravitation. In some sense the conservation of acceleration becomes a conservation of net mapped curvature and it becomes funny via Noether’s Theorem.

CP violation as a horizon delta of radius of curvature from the “t” distance is perhaps relevant phrased as a moment of inertia in the 2+1, and its resultant geometric singular forms. This does create the idea of singular forms in the 2+1 space orbiting (or perhaps more correctly resonating) in tune with singularities in the 3+1 space. This interconnection entanglement, or something similar is perhaps connected to the “weak phase”.

So a 7D total space-time, with differing invariants in the 3D and 4D parts. The interesting thing from my perspective is the prediction of a heavy graviton, and conservation of acceleration. The idea that space itself holds its own shape without graviton interaction, and so conserves acceleration, while the heavy graviton can be a short-range force which changes the curvature. The graviton then becomes a mediator of jerk and not acceleration. The graviton, being heavy, would also travel slower than light. Gravity waves would then not necessarily need graviton exchange.

Quantization of theories has I think in many ways gone too far. I think the big breaks of the 21st century will be turning quantized bulk statistics into unquantized statistics, with quantization applied to only some aspects of theories. The implication is that dark matter is bent spacetime, without matter being present to emit gravitons. In this sense I predict it is not particulate.

So 7D, and a differential phase space coordinate for each D (except time), gives a 13D reality. The following is an interesting equation I arrived at at one point for velocity solutions to uncertainty. I did not incorporate electromagnetism, but it’s interesting in the number of solutions, or superposition of velocity states as it were. The w is constant in the assumption, but a perturbative expansion in it may be interesting. The units of the equation are conveniently force. A particle observing another particle would also be moving in such a way, and the non-linear summation for the lab rest frame of explanation might be quite interesting.

(v^2)v''' − 9vv'v'' + 12(v')^3 + (1 − v^2/c^2)v'(wv)^2 = 0

With ' representing differentiation w.r.t. time. So v' is acceleration and v'' is the jerk. I think v''' is called the jounce, for those with a mind to learn all the Js. An interesting equation, considering the whole concept of uncertain geometry started from an observation that relative mass was kind of an invariant; mass oscillation, although weird with RMS mass and RMS energy conservation, was perhaps a good way of parameterizing an uncertainty “force” proportional to the kinetic energy momentum product. As an addition it was more commutative as a tensor algebra. Some other work I calculated suggests dark energy is conservation of mass times log of normalized velocity, and dark matter could be conserved acceleration, with gravity and the graviton operating not to bend space on density, but to bend space through a short-distance-acting heavy graviton. Changes in gravity could thus travel slower than light, and an integral with a partial fourth power fraction could expand into conserved acceleration, energy, momentum and mass information velocity (dark energy), with perhaps another form of Higgs, and an uncertainty boson (spin 1) as well. A numerical integration sketch of the equation follows.
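
Treating the printed equation purely as a third-order ODE in v(t), it can be integrated numerically for any chosen w, c and starting state; the values below are arbitrary placeholders, and scipy is assumed:

```python
from scipy.integrate import solve_ivp

C = 1.0    # speed of light in these units (placeholder)
W = 2.0    # the constant w in the equation (placeholder)

def rhs(t, y):
    """State y = (v, v', v''), from rearranging
    (v^2)v''' - 9vv'v'' + 12(v')^3 + (1 - v^2/c^2)v'(wv)^2 = 0 for v'''."""
    v, v1, v2 = y
    v3 = (9*v*v1*v2 - 12*v1**3 - (1 - v**2/C**2)*v1*(W*v)**2) / v**2
    return [v1, v2, v3]

sol = solve_ivp(rhs, (0.0, 10.0), [0.5, 0.05, 0.0], rtol=1e-9, atol=1e-12)
print(sol.y[0][-5:])   # v(t) near t = 10 for this arbitrary starting state
```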

So really a 13D geometry. Each velocity state in the above mass-independent free-space equation is an indication of a particle of differing mass. A particle count based on solutions. 6 quarks and all. An actual explanation for the three flavours of matter? So, assuming an approximate linear superposable solution with 3 constants of integration, this gives 6 parameterized solutions from the first term, via 3 constants and the square being rooted. The second term involves just 2 of the constants for 2 possible offsets, and the third term involves just one of the constants, but 3 roots, with two being in complex conjugation. The final term involves just one of the constants, but an approximation to the fourth power for 4 roots, and disappears when the velocity is the speed of light, and so is likely a rest mass term.

So that would likely be a fermion list. A boson list would be in the boundaries at the discontinuities between those solutions, with the effective mass of the boson controlled by the expected lifetime between the states, and the state energy mismatch. Also of importance is how the equation translates to 4D, 3D spacetime, and the normalized rotational invariants of EM and other things. Angular momentum is conserved and constant (dimensionless in uncertain geometry).

Assuming the first 3 terms are very small compared to the last term, and v is not the speed of light, there would have to be some imaginary component to velocity, and this imaginary part would be one of the degrees of freedom (leading to a total of 26). Is this imaginary velocity consistent with isospin?

Yang–Mills Existence and Mass Gap (Clay Problem)

If mass oscillation is proved to exist, then the mass gap can never be proved to be greater than zero as the mass must pass through zero for oscillation. This does exclude the possibility of complex mass oscillation, but this is just mass shrinkage (no eventual gap in the infinite time limit), or mass growth, and hence no minimum except in the big bang.

The 24 degrees of freedom on the relativistic compacted holographic 3D for the 26D string model imply, with elliptic functions, a 44-fold way. This is a decomposition into 26 sporadic elliptic patterns and 18 generational spectra patterns. With the differential equation above providing 6*2*(2+1) combinations from the first three terms, and the 3 constants of integration locating in “colour space” through a different orthogonal basis, this would provide 24 apparent solution types, with 12 of them having a complex conjugation relation as a pair, for 36. If this is the isospin solution, then the 12 fermionic solutions have all been found. That leaves the 12 bosonic solutions (the ones without a conjugate in the 3rd-term generative), with only 5 (or, if the photon is special, 4) having been found so far. If the bosonic sector includes the dual rooting via the second term for spin polarity, then of the six (with the dual degenerates cancelled), two more are left to be found if light is special in the 4th term.

This would also leave 8 of the 44-way in a non-existent capacity. I’d maybe focus on them being gluons, and consider the third still to be found as a second form of Higgs. OK.

Displacement Currents in Colour Space

Maybe an interesting wave induction effect is possible. I’m not sure what the transmitter should be made of. The ABC modulation may make it a bit “alternate” near the field emission. So not caused by bosons in the regular sense, more the “transition bosons” between particle states. The specific transitions between energy states may (although it’s not certain), pull the local ABC field in a resonant or engineered direction. The actual ABC solution of this reality has to have some reasoning for being stable for long enough. This does not imply though that no other ABC solutions act in parallel, or are not obtainable via some engineering means.

Implementation of Digital Audio filters

An interesting experience. The choice between FIR and IIR is the primary one. As the filtering is modelling classic filters, the shorter-coefficient varieties of IIR are the best choice for me. The fact of an infinite impulse response is not of concern with a continuous stream of data, and coefficient rounding is not really an issue when using doubles. IIR also has the advantage of an easy Sallen-Key implementation, due to the subtraction and re-adding of the feedback component, with a very simple CR processing.

The most interesting choices are to do with the anti-alias filtering, as the interpolation filter on up-sampling is an easy choice. As the ear is not really responsive to phase, all the effort should be on the pass band response levels, and a good stop band non-response. Legendre or Butterworth are the candidates. The concept of a characteristic sound enters the design process at this point, as the cascading of SK filter sections is conceptually useful to improve the -6 dB response at cut-off. This is a trade-off between 20 kHz and 22.05 kHz in the alias pass band, and greater attenuation in the above-22.05 kHz infinite stop desire. The slightly greater desire for alias attenuation above pass band maximal flatness (for audio harmony) implies the Legendre filter is better for the purpose than Butterworth.

In the end, the final choice is one of convenience, and a 9th-order filter was decided upon, with 4 times oversampling. Using 4 times oversampling instead of 8 times brings the alias band an octave lower. Under the assumption of at least a linear reduction, with rising frequency, in the amplitude of whatever generates an alias frequency, this just requires an extra -12 dB of gain reduction in the alias filter for an effective equivalence to 8 times oversampling (the up-to and the reflection back down to: 6 + 6). The amount of GHz processing also halves. These facts then become constructive in the design, with the bulk alias close to the cut-off, and the minor reflected alias-alias limit not being too relevant to overall alias inharmonic distortion.

A triple chain of 3-pole Legendre filter sections is the decided design. The approximate -9 dB at the corner allows for slightly shifting up the cut-off and still maintaining a very effective stop band. Code reuse also aids I-cache usage for effective CPU use. A single 3-pole Legendre is the interpolation up-sample filter. The roll-off for not using Butterworth does cut some high frequency content from the maximally flat, hence the concept of maximally flat, but it outperforms a Bessel filter in this regard. It’s not as though a phaser or flanger needs to operate almost perfectly in the alias band.

Perhaps there is improvement to be made in the up-sampling filter, by post-up-sample 88.2 kHz noise-shaped injection to eliminate all error at 44.1 kHz. This may have a potential advantage in mapping the alias noise into the low frequencies, instead of encroaching from the higher frequencies toward the lower, and in creating the alias as a reduction in signal to noise, instead of at certain inharmonic peaks. The main issue with this is the 44.1 kHz wave fundamental, seen as the amplitude ring modulation of the injected phase noise by the 44.1 kHz stepped waveform between input samples. The 88.2 kHz “carrier” and the sidebands are higher in frequency, and of the same amplitude magnitude.

But as this is aiming for no 44.1 kHz error, the 88.2 kHz and sidebands are the induced noise, the magnitude of which is of the order of 1 octave up from the -3 dB roll at the corner, plus approximately the octave for a 3-pole filter, or about a 36 dB cut of a signal 3/4 of the input amplitude. I’d estimate about -37 dB at 88.2 kHz, and -19 dB at 44.1 kHz. Post-processing with a 9-pole filter provides an extra -54 dB on down-sampling, for an estimate of around -73 dB or greater on the noise. That would be about 12-bit resolution at 44.1 kHz, increasing with frequency. All estimates, likely with errors, but in general not a good idea from first principles. Given that the 44.1 kHz content would be very small post the interpolation filter, -73 dB down from this would be good, although I don’t think achievable in a sensible manner.

Using the last filtered sample as the reference baseline for the present sample being filtered, the signal at 22.05 kHz would be smoothed. It would have a notch filter effect, by injecting quantization offset ringing noise at 88.2 kHz to cancel 22.05 kHz. The notch would likely extend down in frequency, for maybe -6 dB at about 11 kHz. Perhaps in the end it is just better to subtract the multiplied difference between two up-sample filters using different sinc spreading, of a 1000 and a 1100 sample occupancy zero inter-fill. Subtracting the alternates’ up-conversion delta, as it were.

There is potentially also an argument for having a second-order section with damping factor near 0.68 and corner 22.05 kHz, to achieve some normalisation from sinc up-sampling. This adds in an amount of Q such as to peak the filter, cancelling the sinc droop, which would be about 3% at 4 times oversampling (checked in the sketch below).
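
The droop figure is easy to check under one reading: a zero-stuffed/zero-order-hold style up-sample rolls off as sin(x)/x, so at 22.05 kHz:

```python
import math

def sinc_droop(f_hz, oversample, fs_hz=44100.0):
    """Amplitude of the sinc roll-off at f_hz when interpolating at
    fs_hz * oversample (the droop the peaking section would correct)."""
    x = math.pi * f_hz / (fs_hz * oversample)
    return math.sin(x) / x

for n in (4, 8):
    droop = 1.0 - sinc_droop(22050.0, n)
    print(f"{n}x oversampling: droop at 22.05 kHz ≈ {droop:.1%}")
# 4x gives ≈ 2.5% (the "about 3%" above); 8x gives ≈ 0.6% (the "less than 1%" in the EDIT)
```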

EDIT: Some of you may have noticed that the required frequencies for stable filtering are too high at 4 times oversampling. So, unfortunately for the CPU load, an 8 times oversample has to be used. The sinc error is less than 1% at this oversample, but it is still corrected in a similar way, with a benefit of 2 extra poles. Following this by a 0.1 dB 3-pole Chebyshev high-pass which has been inverted gives a reasonable 5-pole up-sampling filter. The down-sampling filter, for code efficiency, is a triple instance of the same inverse Chebyshev, with the corner frequencies slightly offset to produce more individual zeros and some spreading of the “ringing”. These 9 poles are enough to get the stop band ripple lower than a 16-bit resolution (see the sketch below). Odd-order inverse Chebyshev filters are essential for the reflected spectra to be continually decreasing in amplitude.
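
A sketch of the cascaded arrangement in the EDIT, using scipy’s inverse Chebyshev (cheby2) sections: three 3-pole sections with slightly offset corners stacked into one SOS filter, plus a check of the composite stop band. The corner frequencies and per-section attenuation are placeholder numbers chosen only to show the mechanics and the stop-band summing; balancing the pass band against them is the actual design question discussed above.

```python
import numpy as np
from scipy import signal

FS = 44100 * 8                     # 8x oversampled processing rate
CORNERS = [21000, 21500, 22050]    # slightly offset stop-band edges in Hz (placeholders)
STOP_DB = 34                       # per 3-pole section; three sections sum past 96 dB

# Three cascaded 3-pole inverse Chebyshev (cheby2) low-pass sections as one SOS array.
sos = np.vstack([
    signal.cheby2(3, STOP_DB, fc, btype="low", fs=FS, output="sos")
    for fc in CORNERS
])

w, h = signal.sosfreqz(sos, worN=8192, fs=FS)
stop = w >= 22050
worst = 20 * np.log10(np.abs(h[stop]).max() + 1e-12)
print(f"worst stop-band gain above 22.05 kHz: {worst:.1f} dB")   # below -96 dB (16 bit)
```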

Site Looks

I thought about making my site look like Google today. The green here is distinctive, but maybe the site could learn from some Google UI styling. But then maybe the net would all look the same. A bit of an off dream where the moon had all one style, and then each planet had sort of a brand. It’s more the arguments about the extent of confusion that would perhaps result from an eye-catching UI. I still have not really formatted my landing page to my satisfaction. It’s an interesting thought that when the web of one planet becomes a wiki zone of another, the web becomes in effect a pivot table of UI elemental descriptions. UI design has been the defining feature of the web. Well yes, apps had it first, but there was a consistency of design in the super highway process, such that too much differentiation was counterproductive in education hours.

Some have experimented with auto reflow of site HTML to modularize the CSS content and so reduce the tweak time. A reflow portal based not just on the browser agent but on other stylistic factors could be a good future development strategy. I wouldn’t say it was a site priority here, but it might be a good future project. This might be good for public spaces, where the “seen that one” reaction brings about little review.

SpliceArray in JavaScript

The current focus in the roitEmbed project is the SpliceArray class. The aim is a tree array structure, where each leaf does not have to be full. The cumulative counts of the leaves make for a simple binary search algorithm to find the correct leaf and element. Splicing involves removing and adding elements. This turns the expensive linear copy of almost all elements on a large array splice into just insertion and deletion of smaller arrays, and so makes the splice not dependent on the number of array elements. The depth of the array tree needed depends on the maximum size, and is a trade-off between differing efficiencies. An optimal size for each leaf node is about sixteen elements. This means the branch nodes have thirty-two elements, as both a link and a cumulative count are required. A tree depth of six allows for a 16.7 million element array maximum. This is more than sufficient for any task I can imagine being done by browser-side JS. A simplified sketch follows.
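
A compact sketch of the structure (the real roitEmbed class is JavaScript; this is a single-level Python analogue with a flat list of leaves plus cumulative counts, rather than the full depth-six tree):

```python
from bisect import bisect_right

LEAF_MAX = 16   # the "about sixteen elements" per leaf

class SpliceArray:
    """Array stored as a list of small leaves with cumulative counts, so a
    splice only rewrites the few leaves it touches."""

    def __init__(self, items=()):
        items = list(items)
        self.leaves = [items[i:i + LEAF_MAX]
                       for i in range(0, len(items), LEAF_MAX)] or [[]]
        self._recount()

    def _recount(self):
        # Cumulative element counts, used by the binary search to find a leaf.
        self.cum, total = [], 0
        for leaf in self.leaves:
            total += len(leaf)
            self.cum.append(total)

    def _locate(self, index):
        leaf_i = bisect_right(self.cum, index)
        before = self.cum[leaf_i - 1] if leaf_i else 0
        return leaf_i, index - before

    def __len__(self):
        return self.cum[-1]

    def __getitem__(self, index):
        leaf_i, offset = self._locate(index)
        return self.leaves[leaf_i][offset]

    def splice(self, start, delete_count, new_items=()):
        """Remove delete_count items at start, insert new_items in their place,
        and return the removed items; only the overlapping leaves change."""
        li, off = self._locate(start)
        removed, need = [], delete_count
        tail = self.leaves[li][off:]
        self.leaves[li] = self.leaves[li][:off]
        while need > len(tail) and li + 1 < len(self.leaves):
            removed += tail
            need -= len(tail)
            tail = self.leaves.pop(li + 1)
        removed += tail[:need]
        tail = tail[need:]
        block = list(new_items) + tail
        new_leaves = [block[i:i + LEAF_MAX] for i in range(0, len(block), LEAF_MAX)]
        self.leaves[li + 1:li + 1] = new_leaves
        self.leaves = [leaf for leaf in self.leaves if leaf] or [[]]
        self._recount()
        return removed

a = SpliceArray(range(100))
print(a.splice(10, 3, ["x", "y"]))   # [10, 11, 12]
print(a[9], a[10], a[11], a[12])     # 9 x y 13
print(len(a))                        # 99
```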