A very nice calculator app. I’ll continue to use it. What would I change? And would I change what I’d changed? A fork with extras began and is in development.

- I’d have a **SAVE** and **LOAD**, with load varieties (**LOADY**, **LOADZ**, **LOADT** for that register and all stack registers above it, when not all 4 stack levels are to be restored along with **LASTX**), chosen to restore the right stack pattern after a call, a behaviour which makes for first-class user-defined functions. **SAVE?** would return how many levels of saving there are.
- Perhaps variables based on the current program location (or section). A better way of reducing clutter than a tree, while accessing the tree would need a new command specifying the variable context. This would lead to a minimal **CONTEXT** to set the **LBL**-style recall context, and **THIS** to set the current context as usual but without the variable-in-context clutter. A simple default of changing the context when changing program space keeps things consistent. In fact, nested subroutines could also provide a search order through outer contexts. **THAT** could just remove one layer of context, or more precisely change the current context to the one below it on the call stack, so that **THAT THAT** would get the second nesting context if it exists. **LSTO** helps a little.
- Some mechanics for the execution of a series term generator: a modified **XEQG** (execute generator) could provide faster summation, or, selected by flags, a product, a sum, a single **term**, or continued-fraction precision series acceleration.
- Differential (numeric) and integral (endpoint numeric, multiple kinds, all with one implicit bound of zero for the constant at zero) algorithms. I would not reimplement them 😀, as I would like a series representation, perhaps via an auto-generated generator. So **XEQG** would have a few cousins.
- Although Mathematica-style solving might not give %n inserts for parameterizing a solution for constants, this does not prevent **XEQG** doing a differential by sampling either side at high order and reducing the delta geometrically for a series estimate of the exact value. On the integral side, an integral of x^n.f(x) where n goes to zero provides the first bit of insight into integrals as convergent sets of series, with an exclusion **NonconvergentAreaComplex[]** on Gödelian (made to make a method of solve fail) differential equations (or parts thereof). Checking the convergents of the term supplied to **XEQG** and its cousins allows for sensible errors, and perhaps transforms to pre-operators on the term provider function. **SeriesRanged[]** (containing an action as a function) lists for the other parts, with correct evaluation based on value; and how does this go multivariate? Although this looks out of place, it relates to series solutions of differential equations with more complex forms based on series of differentials. The integral of x.f(x)/x by parts is another giver of two more generators. The best bit is that the “integral” from such a form is evaluated at just one endpoint (with subtraction for definite integrals), and as these forms include weighted series they can often be evaluated by the series acceleration of a small number of differentials of the function to be integrated. The differentials themselves can often be evaluated accurately as a series converging as the delta is geometrically reduced, with the improvements in the estimates treated as new, smaller terms in the series. So an integral evaluation might come down to (at 9 series terms per acceleration) about 2*90 function invocations, instead of depending on Simpson’s rule, which has no series weighting to “accelerate” the summation. Also, integration up to infinity might be a simpler process when the limits are separated into two endpoint integrals, as the summation over a limit to an estimate of convergence at infinity would not need as many conditional test cases on none, both or either one.
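The idea of a differential evaluated as a series that converges as the delta is geometrically reduced, with each improvement treated as a new smaller term, is essentially Richardson extrapolation. A minimal Python sketch under that interpretation (the function, step and level counts are illustrative, not anything from Free42):

```python
import math

def derivative(f, x, h=0.5, levels=6):
    """Central-difference derivative accelerated by Richardson extrapolation.

    Each halving of the step h yields a new estimate; the successive
    improvements are combined as ever-smaller series terms, which is
    exactly the geometric-delta acceleration described above.
    """
    row = []  # current row of the extrapolation tableau
    for i in range(levels):
        step = h / (2 ** i)
        est = (f(x + step) - f(x - step)) / (2 * step)
        new_row = [est]
        for k, prev in enumerate(row):
            # Central differences have even-power error terms, so each
            # tableau column cancels another factor of 4.
            factor = 4 ** (k + 1)
            new_row.append((factor * new_row[k] - prev) / (factor - 1))
        row = new_row
    return row[-1]

print(derivative(math.sin, 1.0))  # close to cos(1) ≈ 0.5403
```

With 6 levels the smallest step is h/32, yet the accelerated result is far more accurate than any single central difference at that step.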
As I think integrals should always return a function with parametric implicit constants, should not differentials return a parameterized function by default, with a boolean for the possibility of retrieving the faded constants? An offsettable self-recovery of a diminished offset generic? **SeriesRanged[Executive[]][ … ]**
- Free42 Android
- Perhaps an **ACCESS** command for building new generators (with a need to get a single generated term), with a **SETG** (to set the generator evaluating **ACCESS**), so that **XEQG** can become just a set of things to put in **SETG “…”**, making for easy generators of convergents and other structures. **GETG**, for saving a small text string for nesting functions, might be good but is not essential, and might confuse things by opening up indirection. Just having a fixed literal alpha string to **SETG** is enough, as it could recall **ACCESS** operators on the menu like **MVAR** special programs (and not like **INPUT** programs). **XEQG** should still exist, as there is the **SETG** combiner part (reducer) as well as the individual term generator (mapper) that **XEQG** uses for a variety of functions. This would make for easier operator definition (such as series functions by series acceleration, or convergent-limit differentials by similar acceleration on the reduction of the delta) without indirect alpha register calling of iterates.
- A feature to make global labels go into a single menu item (the first) if they are in the same program, which then expands to all labels in the current program when selected, for code management.
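The reducer/mapper split can be sketched in Python: a **SETG**-style step configures the combiner once, and an **XEQG**-style step then drives any term generator through it. All names here are illustrative stand-ins, not Free42 commands:

```python
from math import factorial, e

def make_sum_reducer():
    """Combiner plus its identity value; a SETG-style configuration."""
    return (lambda acc, term: acc + term), 0.0

def make_product_reducer():
    return (lambda acc, term: acc * term), 1.0

def xeqg(reducer, term_generator, n_terms):
    """XEQG-style driver: feed n_terms mapper outputs through the reducer."""
    combine, acc = reducer
    for k in range(n_terms):
        acc = combine(acc, term_generator(k))
    return acc

# Same reducer, different maps: 1/0! + 1/1! + 1/2! + ... converges to e.
approx = xeqg(make_sum_reducer(), lambda k: 1.0 / factorial(k), 20)
```

Setting the reducer once and swapping mappers underneath it matches the common practice the rationale below argues for.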
- **+R** for addition with residual: the fraction of X that was not added to Y is returned in the X register, and the sum in Y. This would further increase precision in some algorithms.
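**+R** as described is the classic two-sum building block of compensated (Kahan-style) summation. A sketch of the proposed semantics in Python, assuming the sum lands in “Y” and the rounding residual comes back in “X”:

```python
def add_with_residual(x, y):
    """TwoSum: return (residual, total) with total + residual == x + y exactly.

    Mirrors the proposed +R: the rounded sum would go to Y and the part
    of the addition lost to rounding would be returned in X.
    """
    total = x + y
    y_part = total - x        # portion of total contributed by y
    x_part = total - y_part   # portion contributed by x
    residual = (x - x_part) + (y - y_part)
    return residual, total

# Compensated summation: carry the residual into the next addition.
total, carry = 0.0, 0.0
for _ in range(10):
    residual, total = add_with_residual(0.1 + carry, total)
    carry = residual
print(total + carry)  # compare with sum([0.1] * 10) == 0.9999999999999999
```

The pair (total, residual) is exact for any two floats that do not overflow, which is what makes the looping residual-accumulation idea mentioned later work.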

Rationale (after more thought and optimization)

- Restoring the stack is good for not having to remember what was there and whether you need to store it. It requires a call stack frame connection, so maybe **SAVE?** is just the call stack depth and so not required. (4 functions): **LOAD** and **SAVE**, with some placing of the old loaded X into LAST X, plus two commands used before **LOAD** is called: **USE** to indicate a stack consumption effect after restore, and **MAKE** to leave one stack entry next lowest as an output.
- Although local variables are good, in-context variables would be nice to see. Clutter from other contexts is avoided, or at least placed more keystrokes away from the main variables. This would also be easier to connect to the call stack frame. (3 functions): **CONTXT**, **THIS** and **THAT**. **RCL** tries the **CONTEXT** before the call stack program associated variables. No code spams variables into other namespaces. **STO** stores into its associated variable space. This ensures an import strategy. The **.END.** namespace can be considered an initial global space, so the persistence of its content upon **GOTO . .** is useful, and **XEQ “.END.”** should always be available.
- **INTEG** and **SOLVE** could be considered operators, but with special variables. Separating the loop to reduce on from the map function makes more general summation functions possible, given single-term functions. It would be more general to have 3 commands so that the reducer, the mapper and the variable to map over could all be set, but is that level necessary? Especially since, in use, the common practice of setting the reducer once and applying it to different maps seems more useful. But consistency and flexibility might have **PGMRED**, **PGMMAP** and **MAPRED “var”** for generality in one variable, with **ACCESS** in the reducer setting the right variable before executing the mapping. (4 functions).
- Addition residual is a common precision technique. (1 function): **+R**.
- I’d also make **SOLVE** and **INTEG** re-entrant (although not necessarily for a nested call to a function already in use; a check against functions already in call stack frames?) by copying salient data on process entry, along with **MAPRED**, where the **PGMRED**-set function can be used again and so does not need a nested-reuse check.
- As to improvements in **SOLVE**, it seems that detection of asymptotes and singularities confuses interval bisection. Maybe adding a small amount and subtracting a small amount moves actual roots but leaves singular poles alone, swamped by infinity. Also, the sum series of the product of the values and/or gradients may or may not converge as the pole or zero is approached.
- Don’t **SAVE** registers or flags, as this is legacy stuff. Maybe add a quadratic (mass centroid) regression, a Poisson distribution and a few others, as the solver could work out inverses. Although there is the inconsistency of stack output versus variable output. Some way of auto-filling **MVAR** from the stack and returns for 8 (or maybe 6: XYZT in, an X subtracted out, and …) “variables” on the **SOLVS** menu? Maybe inverses are better functionality, but the genericity of solvers is better for any evaluation. Allow **MVAR ST X** etc., with a phantom **SAVE**, and have **MRTN** for an expected output variable before the subtraction, making another “synthetic” **MVAR**, or an exit point when not solving (solving with an implicit **– RTN**, and definite integrals being a predefinition of a process before a split by a subtractive equation for solving)? It would, of course, need **MVAR LAST X** to be impossible (a reasonable constraint for error, speed and efficiency certainty). (5+1 menu size). Redefinition of many internal functions (via no **MVAR** and automatic solver pre- and postamble) would allow immediate inverse solves with no programming (**SOLVE ST X**, etc., with no special **SOLVE RTN**, as it’s a plain evaluation). This makes **MRTN** the only added command, plus the extra **ST** modes on **SOLVE**, and also a way of specifying inbuilt functions. The output to solve for can be programmatically set as the X register value when **PGMSLV** is executed, and remembered when **SOLVE** is next used.
- Register 24 is lonely. Perhaps it should contain weighted n, *Σy*; but no, that already exists. *Σx*^{2}*y* seems better, for the calculation of the weighted variance. That would leave registers 0 to 10 as fast scratch saves. The 42 nukes other registers in **ALLΣ** anyway, and I’d think not many programs use register 24 instead of a named variable. I’d be happy with only calculating it in all mode, as I never switch, and people who do usually want to keep register compatibility of routines for HP-41 code. Maybe **PVAR** for the n/(n−1) population variance transforms, although this is an easy function for the user to write. A good metric to measure what gets added; except for **+R**, which is just looping and temporary variables for residual accumulation, with further things to add assuming **LAST Y** would be available, etc.
- I’d even suggest a **QΣ** mode using all the registers 0 to 10 for extra statistical variables and a few of those reserved flags (flag 64). I think there is at least one situation (chemistry) where quadratic regression is a good high-precision idea. This makes **REGS** saving a good way of storing a stats set. Making the registers count down from the stats base in this mode seems a good idea. The following would provide quadratic regression, with lin, log, exp and pow relation mapping on top of it, for a **CFIT** set of 8, along with the use of R24 above. An extra entry on the **CFIT MODL** menu with indicator **QΣ** toggles the extra shaping and register usage (flag 64 set), with an automatic enable of **ALLΣ**. As the parabolic constant would not often be accessed, it would be enough to store it and the other constants after a fit, not interfering with live recalculation, so as not to error by assumption. It would, of course, change the registers **CLΣ** sets to zero. Flag 54 can perhaps store the quadratic fitting model in **QΣ** mode. Quadratic regression details: although this provides enough information to manufacture a result for the weighted standard deviation, it becomes a choice whether to add **WSD**, or an XY-interchange mode on a flag to get inverse quadratic regression, which would provide 12 regression curve options. The latter would need to extend the **REGS** array. **FCSTQ** might be better as a primary command to obtain the forecast root when the discriminant’s square root is subtracted negative, as two forecast roots would exist; the most positive one would likely be more real in many situations. Maybe the linear correlation coefficient says something about the root to use, and **FCSTQ** should use the other one?
- R0 = correlation coefficient
- R1 = quadratic/parabolic constant
- R2 = linear constant
- R3 = intercept constant
- R4 = *Σx*^{3}
- R5 = *Σx*^{4}
- R6 = *Σ(ln x)*^{3}
- R7 = *Σ(ln x)*^{4}
- R8 = *Σ(ln x)*^{2}*y*
- R9 = *Σx*^{2}*ln y*
- R10 = *Σ(ln x)*^{2}*ln y*
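A quadratic fit really does need only running sums like these. A Python sketch of plain (unweighted) quadratic regression recovered purely from register-style sums, via the standard least-squares normal equations (`quad_fit` and its variable names are illustrative, not proposed commands):

```python
def quad_fit(points):
    """Least-squares fit of y = a*x^2 + b*x + c from running sums only,
    the way a Σ+ based CFIT model would: each point updates registers,
    and the coefficients are recovered from them afterwards."""
    n = Sx = Sx2 = Sx3 = Sx4 = Sy = Sxy = Sx2y = 0.0
    for x, y in points:
        n += 1
        Sx += x; Sx2 += x * x; Sx3 += x ** 3; Sx4 += x ** 4
        Sy += y; Sxy += x * y; Sx2y += x * x * y
    # Augmented normal equations for [a, b, c].
    A = [[Sx4, Sx3, Sx2, Sx2y],
         [Sx3, Sx2, Sx,  Sxy],
         [Sx2, Sx,  n,   Sy]]
    # Tiny Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= f * A[col][c]
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coeffs[r] = (A[r][3] - sum(A[r][c] * coeffs[c]
                                   for c in range(r + 1, 3))) / A[r][r]
    return coeffs  # a, b, c

# An exact parabola y = 2x^2 - 3x + 1 should be recovered.
a, b, c = quad_fit([(x, 2 * x * x - 3 * x + 1) for x in range(5)])
```

The lin/log/exp/pow variants on top are the same machinery with x or y replaced by ln x or ln y, which is exactly why the extra R6 to R10 sums above are needed.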

- Flags still being around on the HP-28S was unexpected for me. I suppose it makes me not want to use them. The general user flags of the HP-41 have broken compatibility anyway, as 11 to 18 are system flags on the HP-42S. That would leave flags 67, 78, 79 and 80 for further system allocations.
- I haven’t looked at whether the source for the execution engine has a literal-to-address resolver with an association struct field for speed, with indirection handled in a similar manner, maybe even down to filling in address function pointers for checks and error routines, like in a virtual dispatch table.
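The resolver idea amounts to pre-binding each program literal to a handler function pointer once, with the error routine filled into unresolved slots so the inner loop is just a table call. A toy illustration (this is not Free42’s actual engine, and the names are invented):

```python
def op_add(stack):
    """Handler for '+': pop two values, push their sum."""
    y, x = stack.pop(), stack.pop()
    stack.append(y + x)

def op_err(stack):
    """Error routine pre-filled for unresolved instruction slots."""
    raise RuntimeError("unresolved instruction")

DISPATCH = {"+": op_add}  # literal -> function pointer

def resolve(program):
    """Resolve every literal to its handler once, ahead of execution,
    like filling in a virtual dispatch table."""
    return [DISPATCH.get(op, op_err) for op in program]

stack = [2.0, 3.0]
for handler in resolve(["+"]):
    handler(stack)   # the run loop never re-parses literals
# stack is now [5.0]
```

The point of the design is that parsing and error-checking costs are paid once at resolve time, not on every execution of a loop body.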
- If endpoint integrals provide wrong answers, then even the investigation into the patterns of deviation from the true grail would summate to eventually make them right in time. A **VirtualTimeOptimalIngelCover[]** is a **very abstract class** for me today. Some people might say it’s only an analytical partial solution to the problem. **DivergentCover[]** as a subclass of **IngelCover[]**, which itself is a list container class of the type **IngelCover**. Not quite a set, as removing an expansive intersection requires an addition of a **DivergentCover[]**. It’s also a thing about series summation order commutativity for a possible fourth endpoint operator. **MultiwayTimeOptimizer[ReducerExecutive[]][IngelCover[MapExecutive[]][]]** and **ListMapExecutiveToReturnType[]**, and the idea of method-use object casting. And an **Ingel** of classes replaces the set of all classes.
- I don’t use printing in that way. There’s an intermediate adapter called a PC tablet mix. The HP-41 was a system. A mini old mainframe. A convenience power efficiency method. My brother’s old CASIO with just P1 and P2 was my first access to a computational device. I’m not sure its reset kind of goto was Turing complete, in a not-enough-memory-for-predicate-register-branch-inlining sense.
- ISO 7-layer to 8-layer: insert at level 4 a virtualized channel layer. It provides a data transform between transmit-optimal and compute-optimal. Is this the DataTransport layer?
**Ingel[AutomaticExecutive[]][]**.
- Paper
- (Media Codec)
- Symbols
- (Rate Codec)
- Envelope
- (Ring Codec) 3, 2 …
- Post Office
- (Drone codec)
- Letter Box
- (Pizza codec)
- Name
- (Index codec)
- Dear

- Adding **IOT** as a toggle (flag 67) command in the **PRINT** menu, the closest place to IO on Free42: setting the print upload to a kind of object entity server. Scheduling compute racks, with the interface problem of busy-until-state-return. A command **CFUN** executes the cloud functions which have been “printed”. Cloud sync involves keeping the “printed” list and presenting it as an options menu in the style of **CATALOG** for all clouded things. **NORM** (auto-update publish, plus backup if accepted; merge remote, no global **.END.**) and **MAN** (manual publish, no loading) set the sync mode of published things, while **TRACE** (manual publish, merge remote plus logging profile) takes debug logs on the server when **CFUN** is used, but not for local runs. Merge works by namespace collision with local-code priority, and no need to import remote callers of named function space. **LIST** sets a bookmark on the server.
- An auto **QPI** mode for both x and y, in the **DISP** menu, flag mode on in register 67. Could be handy. As could a complex statistics option when the **REGS** array is made complex. It would be interesting to see options for complex regression. As a neural node functor, a regression is suitable for propagation adaptation via **Σ+** and **Σ−**.
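A rough sketch of what an auto-**QPI** display decision might attempt, assuming it only tries small fractions and fractions of π (real QPI implementations also try square roots and logarithms; the function name, bounds and tolerance here are invented for illustration):

```python
from fractions import Fraction
import math

def qpi(x, max_den=100, tol=1e-9):
    """Guess a 'nice' exact form for x: a small fraction p/q, or such a
    fraction times pi. Falls back to the plain decimal when neither
    candidate matches to within the tolerance."""
    for scale, suffix in ((1.0, ""), (math.pi, " pi")):
        frac = Fraction(x / scale).limit_denominator(max_den)
        if abs(x - float(frac) * scale) < tol * max(1.0, abs(x)):
            return f"{frac}{suffix}"
    return repr(x)

print(qpi(0.75))          # 3/4
print(qpi(math.pi / 6))   # 1/6 pi
```

Trying the plain-fraction form first matters: with a small `max_den` no fraction of modest denominator gets close enough to π/6, so the π-scaled candidate wins, while genuinely rational inputs never get dressed up as multiples of π.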