QMK Link Time

So having got the new build process working as expected (after some Python ~/.local cache storage issue), link time optimization frees up almost 4 kB for future features. On the latest master build that is quite a lot of feature space for such a small microcontroller. A nice feature of the more modern build options.

I should be able to put together a nice Crimbo update with such a gargantuan amount of code memory, and no need to duplicate features the computer should do easily. In some ways the difficulty is making something extra which might get used.

Docs for amperzand, a little experiment in server languages I’m thinking about. Anonymous inner class? Sounds like it might be cool. So there is still about 3 kB left after double-struck letters (for drop capitals perhaps). That leaves some 46 macros to define before I have to get a little more creative in what’s available.

And now I’ve added automatic South Korean syllabic combinations.
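
For reference, a minimal Dart sketch of the standard Unicode composition arithmetic such combining has to follow (this is just the Unicode Hangul syllable formula as an illustration, not the actual keymap code):

// Standard Unicode Hangul syllable composition (the U+AC00 block):
// syllable = 0xAC00 + ((initial * 21) + medial) * 28 + final
// initial: 0..18 leading consonants, medial: 0..20 vowels,
// final: 0..27 trailing consonants (0 means none).
int hangulSyllable(int initial, int medial, [int finalJamo = 0]) {
  assert(initial >= 0 && initial < 19);
  assert(medial >= 0 && medial < 21);
  assert(finalJamo >= 0 && finalJamo < 28);
  return 0xAC00 + ((initial * 21) + medial) * 28 + finalJamo;
}

void main() {
  // 한 = initial ㅎ (18), medial ㅏ (0), final ㄴ (4) → U+D55C
  print(String.fromCharCode(hangulSyllable(18, 0, 4)));
}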

K Ring CODEC Existential Proof

When p = 2q, L(0) is not equal to L(1).

Find n such that (L(0)/L(1))^(2n+1) defines the number of bias elements for a certain bias exceeding 2:1. This is not the minimal number of bias elements but is a faster computation of a sufficient existential cardinal order. In fact, it’s erroneous. A more useful equation is

E=Sum[(1-p)*(1-q)*(2n-1)*(p^(n-1))*q^(n-1)+((1-p)^2)*2n*(q^n)*p^(n-1),n,1,infinity]
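
In conventional notation (a direct transcription of the line above):

E = \sum_{n=1}^{\infty} \Big[ (1-p)(1-q)(2n-1)\, p^{\,n-1} q^{\,n-1} + (1-p)^2\, 2n\, q^{\,n} p^{\,n-1} \Big]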

Showing an asymmetry on pq for even counts of containment between adding entropic pseudo-randomness. So if the direction is PQ-biased detection and subsample control via horizontal and vertical position splitting? The bit quantity of clockwise parity XOR reflection count parity (CWRP) has an interesting binary sequence. Flipping the clockwise parity and the 12/6 o’clock location inverts the state for modulation.

So asymmetric baryogenesis, that process of some bias in antimatter and matter with an apparently identical mirror symmetry with each other. There must be an existential mechanism and in this mechanism a way of digitizing the process and finding the equivalents to matter and antimatter. Some way of utilizing a probabilistic asymmetry along with a time application to the statistic so that apparent opposites can be made to present a difference on some time presence count.

Proof of Topological Work

A cryptocoin mining strategy designed to reduce power consumption. The work is divided into tiny bursts of work with bits of stall caused by data access congestion. The extensive nature of solutions and the variance of solution time reduce conflict, as opposed to a single hash function solve. As joining a fork increases splitting of share, focusing the tree spread into a chain has to be considered. Pull request ordering tokens can expire until a pull request is logged with a solution, so pull request tokens have to be requested at intervals and also after expiry, while any solution needs a valid pull request token to be included in the pull request, such that the first solution in a time interval can invalidate later pull requests solving the same interval.

The pull request token contains an algorithmic random and the head random based on the solution of a previous time interval, which must be used to perform the work burst. It therefore becomes pointless to issue pull request tokens for a future time interval, as the head of the master branch has not been fixed, and so the pull request token would, by a large order, not be checksum valid.
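
As a sketch of the checks implied above (the class, field names and the equality test on the head value are all illustrative assumptions, not a specification):

// Hypothetical sketch of pull request token validation as described above.
// Field names and the head comparison are assumptions for illustration only.
class PullRequestToken {
  final int interval;          // the time interval the token was issued for
  final int algorithmicRandom; // issued algorithmic random
  final int headRandom;        // derived from the solved head of the previous interval
  PullRequestToken(this.interval, this.algorithmicRandom, this.headRandom);
}

bool tokenValid(PullRequestToken t, int currentInterval, int masterHeadRandom) {
  // expired, or issued for a future interval: the head was not yet fixed,
  // so headRandom could not have been derived from it (checksum invalid)
  if (t.interval != currentInterval) return false;
  // must be built on the actual master head of the previous interval
  return t.headRandom == masterHeadRandom;
}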

The master head address becomes the congestion point. The address is therefore published via a torrent-like mechanism with a clone performed by all slaves who wish to become the elected master. The slaves also have a duty to check the master for errors. This then involves pull-request submissions to the block-tree (as git is) on various forks from the slave pool.

This meta-algorithm therefore can limit work done per IP address by making the submission IP be part of the work specification. Some may like to call it proof of bureaucracy.

The Cryptoclock

As running a split network on a faster clock seems the most effective hack, the master must set the clock by signed publication. On a clock split, the closest modulo hashed time plus block slave salt wins. The slave throne line is on the closest modulo hashed values for salt with signed publication. This ensures a corrupt master must keep all slave salts (or references) in the published blocks. A network join must demote the split via a clock moderation factor. This ensures that culling a small subnet to run at a higher rate, to disadvantage the small subnet, is punished by the majority of neutrals on the throne line in the master elective on the net reunion, through the punitive clock rate deviation from the majority. And you could split and run lower in an attempt to punify!

Estimated 50 pounds sterling 2021-3-30 in bitcoin for the company work done 😀

The Rebase Compaction Bounty (Bonus)

Designed to be a complex task, a bounty is set to compress the blockchain structure to a rebased, smaller data equivalent. This is done by effectively removing many earlier blocks and placing a special block of archival index terminals for non-transferred holdings in the ancient block history. This is bound to happen infrequently, or never, being set at a lotto rate depending on the mined percentages. This would eventually cause a work spurt based on the expected gain. The ruling controlling the energy expenditure versus the archival cost could be integrated with wallet stagnation (into the void) by setting a wallet timeout of the order of many years.

A form of lotto inheritance for the collective data duplication cost of historic irrelevance. A super-computation only to be taken on by the supercomputer of the age. A method, therefore, of computational research as it were, and not something for everybody to do, but easy for everybody to check as they compact.

Gradients and Descents

Consider a backpropagation which has just been applied to a network under learning. It is obvious that various weights changed by various amounts. If a weight changes little, it can be considered good. If a weight changes a lot, it can be considered an essential definer weight. Consider the maximal definer weight (the one with the greatest change) and change it a further per cent in its defined direction. Feed the network forward and backpropagate again. Many of the good weights will go back closer to where they were before the definer pass and can be considered excellent. Others will deviate further and be considered ok.
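
A minimal sketch of that classification pass; the reversal test and the thresholds are my own assumptions, only the category names come from the text:

// Sketch of the definer/excellent/good/ok classification described above.
enum WeightClass { definer, excellent, good, ok }

/// deltaA: per-weight changes from the first backpropagation pass.
/// deltaB: per-weight changes from the pass run after nudging the maximal
/// definer a further per cent in its defined direction.
List<WeightClass> classify(List<double> deltaA, List<double> deltaB) {
  assert(deltaA.isNotEmpty && deltaA.length == deltaB.length);
  int definerIndex = 0;
  for (int j = 1; j < deltaA.length; j++) {
    if (deltaA[j].abs() > deltaA[definerIndex].abs()) definerIndex = j;
  }
  return List<WeightClass>.generate(deltaA.length, (j) {
    if (j == definerIndex) return WeightClass.definer;
    // moved back toward where it was before the definer pass: excellent
    if (deltaA[j] * deltaB[j] < 0) return WeightClass.excellent;
    // still a small change: good; deviated further: ok
    return deltaB[j].abs() <= deltaA[j].abs() ? WeightClass.good : WeightClass.ok;
  });
}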

The signed tally of definer(3)/excellent(0)/good(1)/ok(2) can be placed as a variable of programming in each neuron. The per cent weight to apply to a definer, or more explicitly the definer history deviation product as a weight-to-per-cent for the definer’s direction, makes a training map which is not necessary for using the net after training is finished. It does, however, allow even further processing such as “excellent definer” detection. What does it mean?

In a continual learning system, it indicates a new rationale requirement for the problem as it has developed an unexpected change to an excellent performing neuron. The tally itself could also be considered an auxiliary output of any neuron, but what would be a suitable backpropagation for it? Why would it even need one? Is it not just another round of input to the network (perhaps not applied to the first layer, but then inputs don’t always have to be so).

Defining the concept of definer epilepsy, where the definer oscillates due to weight gradient magnification, implies the need for the tally to be a signed quantity and also implies that weight normalization to zero should be present. This requires (though it has not been proven to be the only sufficient condition) that per cent growth from zero should be weighted slightly less than per cent reduction toward zero. This can be factored into an asymmetry stability meta.

A net of this form can have memory. The oscillation of definer neurons can represent state information. They can also define the modality of the net knowledge in application readiness while keeping the excellent all-purpose neurons stable. The next step is physical and affine coder estimators.

Limit Sums

The convergence sequence on a weighting can be considered isomorphic to a limit sum series acceleration. The net can be “thrown” into an estimate of an infinity of cycles of programming on the examples. Effectiveness can be evaluated, and data estimated, on the “window” over the sum as an inner product on weightings, with bounds control mechanisms yet TBC. PID control systems indicate, as a first estimate, that differentials and integrals to reduce error and increase convergence speed are appropriate factors to measure.
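
A toy sketch of where the PID terms would sit on such an error signal (the gains and the error definition are arbitrary assumptions):

// Toy PID term over a convergence error signal, as hinted above.
class PidAccelerator {
  final double kp, ki, kd;
  double _integral = 0, _lastError = 0;
  PidAccelerator({this.kp = 1.0, this.ki = 0.1, this.kd = 0.01});

  /// Given the current error (e.g. loss, or a weight-change magnitude),
  /// returns a correction combining proportional, integral and differential
  /// terms to speed convergence and damp oscillation.
  double correction(double error) {
    _integral += error;
    final differential = error - _lastError;
    _lastError = error;
    return kp * error + ki * _integral + kd * differential;
  }
}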

Dynamics on the per cent definers so to speak. And it came to pass the adaptivity increased and performance metrics were good but then irrelevant as newer, better, more relevant ones took hold from the duties of the net. Gundup and Ciders incorporated had a little hindsight problem to solve.

Fractal Affine Representation

Going back to 1991 and Michael Barnsley developing a fractal image compression system (the Iterated Systems FIF file format). The process was considered computationally intensive in time for very good compression. Experiments with the FIASCO compression system, an open-source derivative, indicate the sweet spot lies at low quality (about 50%): very fast, but not exact. If the compressed image is subtracted from the input image and the residual further compressed a number of times, performance improves dramatically.
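
A sketch of that residual re-compression loop (compress and decompress stand in for whatever codec is used, e.g. FIASCO at low quality; the image-as-list-of-doubles shape is an assumption):

// Residual re-compression as described above: each pass encodes what the
// previous pass failed to capture. Decode by summing the decoded layers.
typedef Codec = List<int> Function(List<double> image);
typedef Decoder = List<double> Function(List<int> coded);

List<List<int>> residualCompress(
    List<double> image, Codec compress, Decoder decompress, int passes) {
  final layers = <List<int>>[];
  var residual = List<double>.from(image);
  for (int p = 0; p < passes; p++) {
    final coded = compress(residual); // fast, low quality, not exact
    layers.add(coded);
    final approx = decompress(coded);
    for (int j = 0; j < residual.length; j++) {
      residual[j] -= approx[j]; // what this pass missed
    }
  }
  return layers;
}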

Dissociating secondaries and tertiaries from the primary affine set allows disjunct affine sets to be constructed for equivalent compression performance, where even a zip compression can remove further information redundancy. The affine sets can be used as input to a network, and in some sense the net can develop some sort of affine invariance in the processed fractals. The data reduction of the affine compression is also likely to lead to better utilization of the net than a convolutional CNN.

The Four Colour Disjunction Theorem

Consider an extended ensemble. The first layer could be considered a fully connected layer distributor. The last layer could be considered to unify the output by being fully connected. Intermediate layers can be either fully connected or colour-limited connected, where only neurons of a colour connect to neurons of the same colour in the next layer. This provides disjunction of weights between layers and removes a competition upon the gradient between colours.
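
A minimal sketch of such a colour-limited connection mask between two intermediate layers (the round-robin colour assignment and the matrix layout are assumptions):

// Weight w[i][j] between layer L and layer L+1 is allowed only when both
// neurons share a colour; multiply this mask element-wise into the weight
// matrix (and its gradient) so the colour partitions stay disjoint.
List<List<double>> colourMask(int inSize, int outSize, {int colours = 4}) {
  return List.generate(outSize, (i) =>
      List.generate(inSize, (j) => (i % colours == j % colours) ? 1.0 : 0.0));
}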

Four is really just a way of seeing the colour partition and does not really have to be four. Is an ensemble of 2 nets of half size better, for the same time and space complexity of computation, with a resulting lower accuracy per colour channel but in total higher discriminatory performance from the disjunction of the feature detection?

The leaking of cross information can also be reduced if it is considered that feature sets are disjunct. Each feature under low to no detection would not bleed into features under medium to high activation. Is the concept of grouped quench useful?

Query Key Transformer Reduction

From a switching idea in telecommunications, an N*N array can be reduced (it is mostly functional due to sparsity) to an N*L array pair and an L*L array. Any cross-product essentially becomes (from its routing of an in into an out) a set of 3 sequential routings, with the first and last being the compression and expansion multiplex to the smaller switch. Cross talk grows to some extent, but this “bleed” of attention is a small consideration given the variance spread of having 3 routing weights to product up to the one effective weight, and computation is less because L is a smaller number than N.
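
A quick parameter count makes the saving explicit (treating the three routings as N×L compression, L×L switch and L×N expansion):

N^2 \;\text{(full)} \quad\text{vs.}\quad NL + L^2 + LN = L(2N+L) \;\text{(factored)}, \qquad L(2N+L) < N^2 \iff L < (\sqrt{2}-1)N \approx 0.414\,N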

The Giant Neuron Hypothesis

Considering the output stage of a neuronal model is a level-sliced integrator of sorts, the construction of RNN cells would seem obvious. The hypothesis asks if it is logical to consider the layers previous to an “integration” layer effectively an input stage, where the whole network is a gigantic neuron and integration is performed on various nonlinear functions. Each integration channel can be considered independent but could also have post layers for further joining integral terms. The integration time can be considered another input set per integrator functional. To maintain tensor shape, as two inputs per integrator are supplied, the first differential would be good also, especially where feedback can be applied.

This leads to the idea of the silicon connectome. Then as now as it became, integration was the nonlinear of choice in time (a softmax divided by the variable, as goes with [e^x-1]/x; a groovemax if you will). The extra net uni-neuron integration layer offers the extra time feature of future estimation at an endpoint integral of network-evolved choice. The complexity of backpropagation of the limit sum through fixed constants and differentiable functions, for a zero adjustable layer insert with scaled estimation of earlier weight adjustment on previous samples in the time series under integration, makes for an ideal propagatable.

This network idea is not necessarily recursive, and may just be an applied network with a global time delta since last evaluation for continuation of the processing of time series information. The actual recursive use of networks with GRU and LSTM cells might benefit from this kind of global integration processing, but can GRU and LSTM be improved? Bistable cells say yes, for a kind of registered sequential logic on the combinationals. Considering that a Moore state machine layout might be more reductionist toward efficiency, a kind of register layer pair for production and consumption to bracket the net is under consideration.

The producer layer is easily pushed to be differentiable by being a weighted sum junction between the input and the feedback from the consumer layer. The consumer layer is more complex when differentiability is considered. The consumer register really could be replaced by a zeroth differential prediction of the future sample given past samples. This has an interesting property of pseudo presentation of the output of a network as a consumptive of the input. This allows use of the output in the backpropagation as input to modify weights on learning the feedback. The consumer must be passthrough in its input to output, while storage of samples for predictive differential generation is allowed.

So it’s really some kind of propagational Mealy state machine. An MNN if you’d kindly see. State of the art, art of the state. Regenerative registration is a thing of the future.

AI and HashMap Turing Machines

Considering that a remarkable abstract datatype or two is possible, and perhaps closely models the human sequential thought process, I wonder today what applications this will have when a suitable execution model, ISA and microarchitecture have been defined. The properties of controllable locality of storage and motion, along with read and write, branch on stimulus and other yet-to-be-discovered machine operations, make for a container for a kind of universal Turing machine.

Today is a good day for robot consciousness, although I wonder just how applicable the implementation model is for biological life all the universe over. Here’s a free paper on a condensed few months of abstract thought.

Computative Psychoanalysis

It’s not just about IT, but thrashing through what the mind does, can be made to do, and did; it all leverages information and modelling simulation growth for matched or greater ability.

Yes, it could all be made in neural nets, but given the tools available why would you choose to stick with the complexity and lack of density of such a solution? A reasoning accelerator would be cool for my PC. How is this going to come about without much worktop workshop? If it were just the oil market I could affect, and how did it come to pass that I was introduced to the fall of oil, and for what other consequential thought sets and hence productions I could change.

One might call it wonder and design dressed in “accidental” reckless endangerment. For what should be a simple obvious benefit to the world becomes embroiled in competition from the drive for profit for the control of the “others”, making of a non-happening which upsets vested interests.

Who’d have thought it from this little cul-de-sac of a planetary system. Not exactly galactic mainline. And the winner is not halting for a live mind.

QtAp Getting Better

So the app is getting better. The “interfaces” for the extensions have been defined for now, and just doing the last functions of UTF import to bring it up to the level of building the first view. The command menu has been roughly defined, and changes based on the view.

Qt so far is quite nice to use. I have found as an experienced C/Java coder, much of the learning curve is not so much finding the right classes, but the assumptions one has to make on the garbage collection and the use of delete. In some cases, it is obvious with some thought (local variable allocation, and automatic destruction after use), while in others not so (using a common QPlainTextDocument in multiple widgets and removing the default ones). Basic assumption says pointer classes have to be manually handled.

https://github.com/jackokring/qtap/blob/master/classfilter.h is a category filter based on an extensible bloom filter. The .c file is in the same directory.
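
For flavour, a generic sketch of a bloom-filter style category test in Dart (this illustrates the idea only; it is not the classfilter implementation, and the hash mixing is an assumption):

// Generic Bloom filter sketch for probabilistic category membership.
class BloomFilter {
  final List<bool> _bits;
  final int _hashes;
  BloomFilter(int bits, this._hashes) : _bits = List<bool>.filled(bits, false);

  int _index(String item, int k) =>
      (item.hashCode ^ (k * 0x9E3779B9)).abs() % _bits.length;

  void add(String item) {
    for (int k = 0; k < _hashes; k++) _bits[_index(item, k)] = true;
  }

  /// false: definitely never added; true: probably added (false positives possible).
  bool mightContain(String item) {
    for (int k = 0; k < _hashes; k++) {
      if (!_bits[_index(item, k)]) return false;
    }
    return true;
  }
}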

N.B. It’s so funny that some “amazing” hackers can bring down this sub-$10 server. Way to go to show off your immense skill. A log line: 142.93.167.254 – – [19/Jan/2020:08:38:01 +0000] “POST /ws/v1/cluster/apps/new-application HTTP/1.1” 403 498 “-” “python-re$ … etc. I’m dead sure no such thing exists on this server. And the /wp-login automated port 80 hammering for services not offered.

But enough of the bad news, when something along the lines of maximal entropic feature virtualization sounds like something nice (or not). Who knows? What’s involved? Some kind of renormalization on the mapping of k-means for a spread which morphs the feature landscape to focus or diverge the areas to be interpreted?

QtAp Release v0.1.13

GitHub Pages

It’s not great, but quite a nice experience with undo/redo, and Git integration. I even added the translation engine as part of the release, but have done no actual translations. It’s a better app initially as it includes some features that will consume time to add to the example notepad app.

Also in the background there is quite a bit which has been done and is ready as soon as the app develops, such as the interception of the action bar so that right click can show or hide which sections are visible, and this is saved as part of the restored geometry.

0.1.13 QtAp Releases and Development

Getting the greying out of menus and the action buttons depending on state was a nice challenge in learning the signal/slot methods Qt uses. The tray is automatically generated too, depending on the calls to the addMenu function and the setting of flags to indicate state response routing.

I’m likely to even build a JavaScript host in there for the user and add in some extras for it, as this seems the most obvious way of scripting in the browser era. There are also possibilities to build new views with QML and so allow some advanced design work under the hood, while maintaining a hybrid approach to code implementation.

I cheated quite a lot by having a dependency on Git and so SSH. I’m not sure I even need the socket interface as long as I do some proxy code in JS to move data to and from C/C++.

Moving on to adding features to the interface, and a command menu which has selections based on the current active view. This could be done by buttons, but actions are better as they have a better shortcut method and easier automatic accessibility tool interfacing.

The icon set likely needs a bit of a spruce up, and matching with some sensible default. Maybe adding in the cancellation of a bash sequence so that anti-commands can be supplied in a list, and run if it makes sense to reverse. Maybe later, later.

EDIT: The overriding of a class when it is attached to a GUI is slightly complex. I found the easiest way (not necessarily the most efficient, as it depends on how the autogenerated setupUi function saves memory when not executed). The super class needs a simple bool stopUi = false, with the extending subclass just passing this second parameter as true and putting an if(!stopUi) execution guard before the super class ui->setupUi(this) call in the constructor.

This allows QObject(parent) to be replaced by superClass(parent) to inherit all the “interface”. There may be other ways using polymorphism, but none as easy.

More Qt 5

So I have the notepad app working sufficiently today for thinking about the next steps in its expansion beyond some simple idiot proofing of the git handling. It should be very easy to add more views, and a view handler for abstract views of data processing of the text which is in the main view.

The stylesheet needs work, but it’s still a matter of following the default themes (at least on Linux), and then that should be fine. The mimetype handling perhaps needs some alteration, as .v files also seem to be text.

Now it’s on to making the view list work so as to generate alternate computed views from the input notepad. And yes, a blank view is available, but it is accessed when the saved state is changed and sets the available calculations.

Ideas in AI

It’s been a few weeks and I’ve been writing a document on AI and AGI which is currently internal and selectively distributed. There is definitely a lot to try out, including new network arrangements or layer types, and a fundamental insight of the Category Space Theorem and how it relates to training sets for categorization or classification AIs.

Basically, the category space is normally created to have only one network loss function option to minimise on backpropagation. It can be engineered so this is not true, and training data does not compete so much in a zero-sum game between categories. There is also some information context for an optimal order in categorization when using non-exact storage structures.

Book Published in Electronic Format. Advanced Content not Beginner Level. Second Edition may Need a Glossary.

The book is now live at £3 on Amazon in Kindle format.

It’s a small book, with some bad typesetting, but getting information out is more important for a first edition. Feedback and sales are the best way for me to decide if and what to put in a second edition. It may be low on mathematical equations but does need an in-depth understanding of neural networks, and some computer science.

AI as a Service

The product development starts soon, from the initial work done over the last few weeks. An AI which has the aim of being more performant per unit cost. This is to be done by adding in “special functional units” optimized for effects that are better done by these instead of a pure neural network.

So apart from mildly funny AaaS selling jokes, this is a serious project initiative. The initial tests when available will compare the resources used to achieve a level of functional equivalence. In this regard, I am not expecting superlative leaps forward, although this would be nice, but gains in the general trend to AI for specific tasks to start.

By extending the already available sources (quite a few) with flexible licences, the aim is the building of easy-to-use AI, with some modifications and perhaps extensions to open standards such as ONNX, and on to maybe VHDL FPGA and maybe ASIC.

Simon Jackson, Director.

Pat. Pending: GB1905300.8, GB1905339.6

Today’s Thought


import 'dart:math';

class PseudoRandom {
  int a;
  int c;
  final int m = 1 << 32;
  late int s;
  late int i;

  // Reversible linear congruential generator: prev() exactly undoes next()
  // because a (odd) has a modular inverse modulo m.
  PseudoRandom([int prod = 1664525, int add = 1013904223])
      : a = prod,
        c = add {
    s = Random().nextInt(m) * 2 + 1;//odd seed
    next();// a fast round
    i = a.modInverse(m);//4276115653 as inverse of 1664525
  }

  int next() {
    return s = (a * s + c) % m;
  }

  int prev() {
    // BigInt for the multiply: (s - c) * i can exceed a 63-bit signed int
    return s = ((BigInt.from(s - c) * BigInt.from(i)) % BigInt.from(m)).toInt();
  }
}

class RingNick {
  List<double> walls = [ 0.25, 0.5, 0.75 ];
  int position = 0;
  int mostEscaped = 1;//the lowest pair of walls 0.25 and 0.5
  int leastEscaped = 2;//the highest walls 0.5 and 0.75
  int theThird = 0;//the 0.75 and 0.25 walls
  bool right = true;
  PseudoRandom pr = PseudoRandom();

  int _getPosition() => position;

  int _asMod(int pos) {
    return pos % walls.length;
  }

  void _setPosition(int pos) {
    position = _asMod(pos);
  }

  void _next() {
    int direction = right ? 0 : walls.length - 1;//truncate to 2
    double wall = walls[_asMod(_getPosition() + direction)];
    if(pr.next() > (wall * pr.m).toInt()) {
      //jumped
      _setPosition(position + (right ? 1 : walls.length - 1));
    } else {
      //not jumped
      right = !right;//bounce
    }
  }

  void _prev() {
    int direction = !right ? 0 : walls.length - 1;//truncate to 2
    double wall = walls[_asMod(_getPosition() + direction)];
    if(pr.s > (wall * pr.m).toInt()) {// the jump over before sync
      //jumped
      _setPosition(position + (!right ? 1 : walls.length - 1));
    } else {
      //not jumped
      right = !right;//bounce -- double bounce and scale before sync
    }
    pr.prev();//exact inverse
  }

  void next() {
    _next();
    while(_getPosition() == mostEscaped) _next();
  }

  void prev() {
    _prev();
    while(_getPosition() == mostEscaped) _prev();
  }
}

class GroupHandler {
  late List<RingNick> rn;

  GroupHandler(int size) {
    if(size % 2 == 0) size++;//keep an odd count so a majority always exists
    // fill with actual RingNick instances (a bare List(size) would leave nulls)
    rn = List<RingNick>.generate(size, (_) => RingNick());
  }

  void next() {
    for(RingNick r in rn) r.next();
  }

  void prev() {
    for(RingNick r in rn.reversed) r.prev();
  }

  bool majority() {
    int count = 0;
    for(RingNick r in rn) if(r._getPosition() == r.leastEscaped) count++;//a main cumulative
    return (2 * count > rn.length);// the > 2/3rd state is true
  }

  void modulate() {
    for(RingNick r in rn) if(r._getPosition() == r.leastEscaped) {
      r._setPosition(r.theThird);
    } else {
      //mostEscaped eliminated by not being used
      r._setPosition(r.leastEscaped);
    }
  }
}

class Modulator {
  GroupHandler gh = GroupHandler(55);

  int putBit(bool bitToAbsorb) {//returns absorption status
    gh.next();
    if(gh.majority()) {//main zero state
      if(bitToAbsorb) {
        gh.modulate();
        return 0;//a zero yet to absorb
      } else {
        return 1;//absorbed zero
      }
    } else {
      return -1;//no absorption emitted 1
    }
  }

  int getBit(bool bitLastEmitted) {
    if(gh.majority()) {//zero
      gh.prev();
      return 1;//last bit not needed emit zero
    } else {
      if(bitLastEmitted) {
        gh.prev();
        return -1;//last bit needed and nothing to emit
      } else {
        gh.modulate();
        gh.prev();
        return 0;//last bit needed, emit 1
      }
    }
  }
}

class StackHandler {
  List<bool> data = [];
  Modulator m = Modulator();

  int putBits() {
    int count = 0;
    while(data.length > 0) {
      bool v = data.removeLast();
      switch(m.putBit(v)) {
        case -1:
          data.add(v);
          data.add(true);
          break;
        case 0:
          data.add(false);
          break;
        case 1:
          break;//absorbed zero
        default: break;
      }
      count++;
    }
    return count;
  }

  void getBits(int count) {
    while(count > 0) {
      bool v;
      v = (data.length == 0 ? false : data.removeLast());//zeros out
      switch(m.getBit(v)) {
        case 1:
          data.add(v);//not needed
          data.add(false);//emitted zero
          break;
        case 0:
          data.add(true);//emitted 1 used zero
          break;
        case -1:
          break;//bad skip, ...
        default: break;
      }
      count--;
    }
  }
}

Statistics and Damn Lies

I was wondering over the statistics problem I call the ABC problem. Say you have 3 walls in a circular path, of different heights, and between them are points marked A, B and C. In any ‘turn’ the ‘climber’ attempts to scale the wall in the current clockwise or anti-clockwise direction. The chance of failing to scale a wall is proportional to the wall height. If the climber fails to get over a wall, they reverse direction. A simple thing, but what is the chance that the climber will be found facing clockwise just before scaling, or not, a wall? Is it close to 0.5, as the problem is not symmetric?

More interestingly, the climber will be in a very real sense captured more often in the cell with the highest pair of walls. If the cell with the lowest pair of walls is just considered as consumption of time, then what is the ratio of the containment time to the total time not spent in that most easily escaped cell?
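
A quick Monte Carlo check of those ratios, using the same wall indexing and jump rule as the RingNick class above (a jump succeeds when the draw exceeds the wall height; the run length is arbitrary):

import 'dart:math';

void main() {
  final walls = [0.25, 0.5, 0.75];
  final rnd = Random();
  int pos = 0;       // cell index, same geometry as RingNick
  bool right = true; // facing clockwise
  final cellTime = List<int>.filled(3, 0);
  int facingClockwise = 0;
  const turns = 1000000;
  for (int t = 0; t < turns; t++) {
    if (right) facingClockwise++;
    final wall = walls[(pos + (right ? 0 : walls.length - 1)) % walls.length];
    if (rnd.nextDouble() > wall) {
      pos = (pos + (right ? 1 : walls.length - 1)) % walls.length; // jumped
    } else {
      right = !right; // bounced, reverse direction
    }
    cellTime[pos]++;
  }
  print('P(facing clockwise before an attempt) ≈ ${facingClockwise / turns}');
  print('occupancy by cell ≈ ${cellTime.map((c) => c / turns).toList()}');
  // containment: time behind the highest pair of walls (cell 2) over the
  // time not spent in the most easily escaped cell (cell 1)
  print('containment ratio ≈ ${cellTime[2] / (cellTime[0] + cellTime[2])}');
}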

So for the binomial distribution of the elimination of the ’emptiest’, when repeating this pattern as an array with co-prime ‘dice’ (if all occupancy has to be in either of the most secure cells in each ‘ring nick’), the rate depends on the number of ring nicks. The considered security majority state is the state (selected from the two most secure cell states) which more of the ring nicks are in, given none are in the least secure of the three states.

For the ring nick array to be in its most secure majority more than two thirds of the time is another binomial or two away. If the most secure state is in the majority more than two-thirds of the time (excluding gaping minimal occupancy cells), and the middle-security cells are in the majority less than two-thirds of the time (by unitary summation), there exists a Jaxon Modulation coding to place data on the Prisoners, by reversing all their directions at once where necessary, to invert the majority into a minority, rarer state with more Shannon information. Note that the pseudo-random dice and other quantifying information remain constant in bits.

Dedicated to Kurt Gödel … I am number 6. 😀

More VST ideas and RackAFX

Looking into more instrument ideas, with the new Steinberg SDK and RackAFX. This looks good so far, with a graphical design interface and a bit of a curve on getting Visual Studio up to compiling. Design the UI, then fill in the blanks with audio render functions. Looks like it will cut development time significantly. Not a C beginner tool, but close.

It’s likely going to be an all-in-one 32 bit .dll file with MIDI triggering the built-in oscillation, and a use-as-a-filter mode too. Hopefully some differently connected processing on the left and right. I want the maximum flexibility without going beyond stereo audio, as I am DAW limited. The MIDI control may even be quite limited, or even not supported in some DAW packages. This is not too bad as the tool is FX oriented, and MIDI is more VSTi.

Na, scratch that, I think I’ll use an envelope follower and PLL to extract note data. More analog, and it simplifies the plugin. Everything without an easy default, excepting the DSP, will not be used. There is no reason to make any more VSTi, and so just VST FX will be done.

Looks like every time you use Visual Studio it updates a few gig, and does nothing better. But it does work. There is a need for a fast disk, and quite a few GB of main memory. There is also a need to develop structure in the design process.

The GUI is now done, and next up is the top-down class layout. I’ve included enough flexibility for what I want from this FX, and have simplified the original design to reduce the number of controllers. There is now some source to read through, and perhaps some examples. So far so good. The most complex thing so far (assuming you know your way around a C compiler) is the choice of scale on the custom GUI. You can easily get distracted in the RackAFX GUI, and find the custom GUI has a different size or knob scale. It’s quite a large UI I’m working on, but with big dials and a lot of space. Forty dials to be exact, and two switches.

I decided on differing processing on each stereo channel, and an interesting panning arrangement. I felt inspired by the eclipse, and so have called it Moon. The excellent WebKnobMan is good at producing dial graphics for custom knobs. The few backgrounds in RackAFX are good enough, and I have not needed GIMP or Photoshop. I haven’t needed any fully custom control views, and only one enum label changing on twist.

Verdict: cheap at the price, not idiot proof, and it does need other tools if the built-in knobs are not enough. I do wonder if unused resources are stripped from the .dll size. There are quite a few images in there. I did have problems using other fonts, which were selectable but did not display or make an error. Bitmaps would likely be better.

The coding is underway, with the class .h files almost in the bag, and some of the .cpp files for some process basics. A nice 4-pole filter and a waveshaper. Likely I will not bother with sample rate resetting without a reload. It’s possible, but if you’re changing rate often, you’re likely weird. Still debating the use of MIDI and a vector joy controller. There is likely a user case. Then maybe after this I’ll try a main synth using PDE oscillators. VST programming is quite addictive.

I wonder what other nice GUI features there are? There is also the fade bypass I need to do, and this may be joined with the vector joy. And also pitch and mod wheels perhaps. Keeping this as unified control does look a good idea. Project Moon is looking good.

The Cloud Project

So far I’m up to 5 classes left to fill in

  • SignedPublicKey
  • Server
  • Keys
  • AuditInputStream
  • ScriptOutputStream

They are closely coupled in the package. The main reason for defining a new SignedPublicKey class is that the current CA system doesn’t have sufficient flexibility for the project. The situation with tunnel proxies has yet to be decided. At present the reverse proxy tunnel over a firewall is based on overriding DNS at the firewall, to route inwards, and not having the self as the IP for the host address. Proxy rights will of course be certificate based, and client-to-client link layer specific.

UPDATE: Server has been completed, and now the focus is on SignedPublicKey for the load/save file access restrictions. The signing process also has to be worked out to allow easy use. There is also some consideration of a second layer of encryption over proxy connection links, and some decisions to be made on the server script style.

The next idea would be a client specific protocol. So instead of server addresses, there would be a client based protocol addressing string. kring.co.uk/file is a server domain based address. This perhaps needs extending.

Musical Research

I’ve looked into free musical software of late, after helping out setting up a PC for musical use. There’s the usual Audacity, and the feature limited but good LMMS. Then I found SuperCollider, and I am now hooked on building some new synthesis tools. I started with a basic FM synth idea, and moved on to some LFO and sequencing features. It’s very nice. There is some minor annoyance with the SC language, but it works very well, and has access to the nice Qt toolkit for making GUIs. So far I’ve implemented the controls as a MIDI continuous controller bus proxy, so that MIDI in is easy. There will be no MIDI out, as it’s not that kind of be-all thing. I’ve settled on 32 controls per MIDI channel, and one window focus per MIDI channel.

I’ve enjoyed programming it so far. It’s the most musical I’ve been in a long while. I will continue this as a further development focus.

The situation so far https://github.com/jackokring/supercollider-demos

The last three are just an idea I had to keep the keyboard as a controller, and make all the GUI connection via continuous controllers only. I’ll keep this up to date as it develops.