Universal Decision Elements in 4-valued Logic


The Results

In March 2012, after a couple of weeks of rediscovering the joys of IFs and DOs, the program was up and running and the answer was expected in short order.

John Loader's one-unused-entry-point UDE was soon breeding lots more. The trouble was, for every UDE examined, more than one new one was being produced. Typically, every 100 UDEs would produce around 180 new ones.

Each time the program went round its 'superloop' it would read the file of UDE/EP sets back in, and because sets with the most unused EPs were at the start of the file, each superloop would start analysing the UDE/EP set with the largest number of unused EPs.

After a few days it became evident that the number of UDEs was huge. The rate of production of UDEs with 1 unused EP showed no signs of tailing off.

It began to look like the number of UDE/unused-entry-point sets (let alone the number of UDEs) was far too vast to be discoverable on a laptop. Particularly so when the program began to produce UDE/EP sets with not just one or two unused EPs, but with 6 unused EPs.

Now, a UDE/EP set with 6 unused entry points equates to 4^6 = 4,096 UDEs. Each of these 4,096 UDEs may generate many sets of unused entry points. One particular UDE/EP set with 6 unused entry points produced over 63 million permutations of substitutions, though most of these were duplicates or subsets of one another, or produced duplicates of UDE/EP sets already generated.

But it was becoming clear that the task was far larger than anticipated. Examining 63 million permutations of substitutions for who knew how many UDEs looked like a mountain far too high to climb.

The program then produced a UDE/EP set with 7 unused entry points, and one of its 4^7 (16,384) UDEs produced a UDE that required the analysis of over 278 million permutations of substitutions. It seemed the task was hopeless.

This depression was only deepened when, a few days later, a UDE with 11 unused entry points appeared. That alone equates to 4,194,304 UDEs, all clamouring to be analysed, any one of which might need the analysis of millions of permutations of substitutions. Completely impossible on a laptop.

Nevertheless, the program was given its head and a few days later UDEs with 11, then 12, then 13, then 14 unused EPs were being produced. A UDE/EP set with 14 unused EPs equates to 268 million UDEs. No way was my laptop ever going to churn through all of them.

However, an hypothesis formed: perhaps a UDE with even more unused EPs would turn out to be a superset of UDEs with fewer unused EPs. So perhaps a UDE with 14 unused EPs would knock out many of those with 13 or fewer and make the task practicable after all.

But it is not so. Yes, a UDE with 14 unused EPs may well knock out some lower ones as subsets, but it produces far more new sets than it thus eliminates. The task continued to grow.

After a few weeks of running, approximately 100,000 UDE/EP sets had been produced. An analysis was done of which entry points they left unused. Of the 256 entry points, the 100,000 sets had 222 of them unused at least once. The most frequently unused entry point was unused over 20,000 times - i.e. it was an unused entry point in around 20% of the 100,000 sets. Perhaps there would be some pattern here that could be exploited to speed up the process. But, like the superset hypothesis, this analysis of which EPs were unused did not prove fruitful.

A few days on, UDEs with 15 unused entry points started turning up. Each equates to 4^15 UDEs - around a billion. So at least it was clear that the number of universal decision elements in four-valued logic was certainly many billions, and possibly a lot more than that. The poor little IBM 1130 had stood no chance back in 1972!

Soon UDEs with 15 unused EPs were being produced at quite a rate, and ones with 16 and even 17 unused EPs were appearing. The program was changed so that it only stored UDE/EP sets with 16 or 17 unused EPs. But after several days no more UDEs with 17 unused EPs appeared, and it seemed like we might be homing in on the answer.

Change of Direction

The original purpose, to generate all UDE/unused entry point (UDE/EP) sets and thus to discover how many UDEs there are, was clearly impractical given the huge numbers involved. So the aim became to analyse all UDEs with 16 and 17 unused EPs to determine if they were the top of the food chain and if so how many of them there were.

But no. A couple of UDEs with 18 unused entry points popped up. They bred more 18s, and then a couple with 19 appeared. The program was modified to delete the (now ten a penny) 16s and 17s and work on the 18s and 19s.

Then one morning I got up to find the program had had a busy night. It had discovered 4,403 UDEs with 19 unused EPs and 159 UDEs with 20 unused EPs. The 18s were relegated to the recycle bin. The program was again modified to store only UDEs with 19 or more unused EPs. And a few hours later 21s appeared, and then a few 22s.

After a week of extensive running, the program seemed to peak with the production of UDEs with 23 unused EPs. The rate of production of new UDEs with 22 or 23 unused EPs slowed to a trickle.

Each UDE with 23 unused EPs (there were 367 of them) is equivalent to 4^23 - roughly 70 trillion (7 x 10^13) - individual UDEs. So it seemed the answer was that there were something in the region of 70 trillion first order universal decision elements of 4 argument places in 4-valued logic.

Running was about to be abandoned and the whole project put aside when the 367 UDEs accidentally got deleted. Finger trouble. Very annoyed, and determined to get them back, the program was set to run again.

After painfully slow re-production of the 23s, a UDE with 24 unused EPs appeared, then one with 25, and then one with 26 unused entry points. Then a 27.

However, despite some long runs, no UDEs with more than 27 unused entry points appeared, quite possibly because of the inability to analyse even a meaningful fraction of the 4^27 (around 18 quadrillion) UDEs that each UDE/27-unused-entry-point set can generate. Running was ceased.

One and a Half Years Later (December 2013)

A year and a half later (December 2013) an email on an unrelated subject spurred renewed interest in the program.

When running had stopped 18 months before, there were 26 UDEs with 27 unused entry points. Seventeen of the unused entry points were the same in all of these 26 UDEs.

Based on a hypothesis that ALL UDEs with 27 unused EPs will have these same 17 unused EPs, the program was changed to leave the values at these 17 EPs at 0 (on the basis that the values in these EPs do not matter if they will be unused in the resulting UDEs) and then try all combinations of values in the other 10 unused EPs. It was practical to analyse all of the 4^10 (about a million) resulting UDEs.

Unfortunately this did not yield any further UDEs with 27 unused EPs.

However, it did trigger a new idea: rather than trying all combinations of values in unused EPs, how about assigning values 0, 1, 2 or 3 randomly to all of the unused EPs? So, rather than trying to analyse the impossibly large 4^27 UDEs that result from doing all combinations of values in the 27 unused EPs, just run a small number of random value combinations.
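To illustrate the idea (a Python sketch with invented names - the actual program is FORTRAN), filling the unused entry points of a 256-entry truth table with random values might look something like this:

    import random

    def randomise_unused_eps(truth_table, unused_eps, trials=50):
        """Yield copies of a 256-entry truth table (values 0-3) with random
        values dropped into the unused entry points.  truth_table is a list
        of 256 ints; unused_eps is a list of entry point indices."""
        for _ in range(trials):
            candidate = list(truth_table)
            for ep in unused_eps:
                candidate[ep] = random.randint(0, 3)
            yield candidate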

In the first run, for each starting UDE/EP set, an experimental 1100 combinations of random values in the 27 unused EPs were tried. Very surprisingly it worked. In the first run the number of UDEs with 27 unused EPs went from 26 to 157.

The hypothesis that 17 of the entry points will be the same in all UDEs with 27 unused EPs was disproved. Amongst the 157 UDEs with 27 unused EPs, only 5 EPs were shared by all of them.

A UDE with, say, 27 unused EPs will not generate another UDE with 27 entirely different unused EPs. Perhaps 25 will be the same and only two EPs in the new UDE will be different. This implies that the values in the 25 EPs that will be unused in the new UDE are irrelevant: only the values in the two EPs that will not feature in the new UDE are significant. There are only 16 combinations of values for those two EPs, and assigning 50 sets of random values gives each particular combination of values in those two EPs (and in any other pair) over a 96% chance of being generated.
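That figure is easily checked: the chance that one particular combination out of the 16 fails to appear in 50 random tries is (15/16)^50, about 4%. A throwaway calculation (not part of the program):

    # Chance that one specific combination (out of 16) of values for a given
    # pair of EPs turns up at least once in 50 random assignments.
    p_hit = 1 - (15 / 16) ** 50
    print(f"{p_hit:.3f}")    # 0.960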

So it was realised that while trying to generate all combinations of values in 27 EPs would require an impossible 4^27 combinations to be tried, a mere 50 random combinations gave very good results. Indeed, runs trying 1000 random combinations, which followed runs that had tried 50, yielded very few additional UDEs.

Renewed interest

Buoyed by the success of the random-values-in-unused-EPs idea, experiments were done, and a week on it was clear that as few as 30 random sets of values in the 27 unused EPs produced more than one new UDE/EP set for each set analysed. Examining 30 UDEs is a lot more practical than 4^27.

After another week, many more runs had been done. The rate of production of new UDE/unused-entry-point sets had slowed. Despite extensive running no UDE with more than 27 unused entry points was found.

Given the past pattern, that once a UDE with n unused EPs had been found, sets with n+1 unused EPs soon emerged, a tentative conclusion was drawn that perhaps there are no UDEs with 28 unused entry points.

If that is the case, the number of first order Universal Decision Elements of four argument places in 4-valued logic may be of the order of 4^27. The 5,620 UDEs with 27 unused entry points share many of those unused entry points, so the number of unique UDEs they will generate is far fewer than 5,620 x 4^27 - maybe nearer 3 x 4^27. Although there would seem to be huge numbers of UDEs with fewer than 27 unused EPs, it seemed unlikely there would be orders of magnitude more of them than 5,620 x 4^27.

The program had apparently run its course.

As an experiment, another version of the program was written. The idea was simply to generate all 4^256 truth tables and analyse each one to see if it is a UDE. As mentioned earlier, this would take a supercomputer an almost infinite amount of time to do, but it would be interesting to see if this approach could at least find some UDEs.

It didn't. After generating millions of random truth tables by putting 0, 1, 2 or 3 in each of the 256 entry points, no UDEs were found. This is perhaps not surprising: the number of possible truth tables is 4^256 and the number of these which represent UDEs is only, say, ~4^27 - a minuscule fraction, roughly 1 in 4^229 (about 1 in 10^138). Like looking for a needle not so much in a haystack as in a whole galaxy of haystacks.

Back to basics

Despite the program being run several more times, no more than the 5,620 UDEs with 27 unused entry points appeared.

However, going back to basics, a look at the breakdown of which entry points are used by which substitutions revealed something interesting. (This data - the entry points used by the 369 substitutions - is constant and does not change with the UDE being examined.)

Four entry points are each used by 15 substitutions. The remaining 252 entry points are used either by 4, 5, 6 or 8 substitutions.

One might expect that entry points used by 15 substitutions have less chance of being unused than EPs used by fewer substitutions. This is indeed the case: examining all the UDEs with 26 or 27 unused EPs revealed that none of the four EPs that are used by 15 substitutions was ever unused by them.

As a corollary, one might expect the entry points that are only used by 4 substitutions to be often unused by UDEs. This too proved to be the case: all of them were unused by one or more UDEs. However, there were several entry points that are used by 5 substitutions that were not unused in any of the UDEs having 26 or 27 unused EPs. But looking back at UDEs with fewer than 26 unused EPs (which are no longer included in runs as there would be far too many of them) revealed some did have some of these EPs unused.

This prompted another experiment. Half a dozen UDEs with only 23 unused EPs, which had unused EPs that are used by only 5 substitutions, were put in as the only input to the main program. Within an hour of running they had bred not only many UDEs with 24, then 25, then 26 unused EPs, but also some with 27 unused EPs. This is a vindication of the random-value-in-unused-EPs approach: it accomplished in an hour what had taken weeks of running with the initial version of the main program. However, none of the UDEs with 26 and 27 unused EPs had the used-by-5-substitutions unused entry point that was present in the seed 23-unused-EP UDEs.

But much more significantly, within 2 hours of running, UDEs with 28 unused EPs appeared, and soon thereafter UDEs with 29. There were now 4 with 29, 398 with 28 and 17,790 with 27 (versus the 5,620 27s it had taken a week to produce earlier!).

So, having manually selected some 23-unused-EP UDEs and had such surprising results, there was clearly much more work to do: to go back and find UDEs which between them have, as unused, all the EPs used by 5 substitutions, and see if any of them also breed new families of UDEs with high numbers of unused EPs.

Having thought (not for the first time) that the end of the road had been reached, there was clearly a lot of exploring still to be done.

Revised Approach

A revised and much faster method was then adopted which resulted in the program running at least ten times faster, for the following reasons.

Previously, the program was allowed to generate and record any UDE with the requisite number of unused EPs. So in a run recording UDEs with, say, 15 or more unused EPs, all (non-duplicate, non-subset) UDE/EP sets with 15+ unused EPs were recorded. But now, the file of UDE/EP sets is examined and a list is produced of EPs that are not unused by any of the UDE/EP sets***. Newly generated UDEs are only recorded if they have one of the EPs on the list. This greatly limits the number of UDE/EP sets that get recorded, which greatly speeds up the process by which duplicates and subsets are eliminated (because there are so many fewer to process).
(*** Actually it looks for EPs that are not unused fewer than (an arbitrary) 1000 times so the gene pool is not limited to just one UDE with any given unused EP.)

Further, the test of whether the UDE being examined has one of the EPs on the list is applied at the point in the process when EPs that must be used have been identified, but before permutations of substitutions are generated and examined. Since this is by far the most processing-intensive part of the process, eliminating it for the vast majority of contender UDEs speeds up the processing greatly. UDEs that pass this test proceed to the permutations of substitution part of the process, and each set of unused EPs thus produced is again examined to see if it still contains at least one of the EPs from the list. Only if it does does the UDE/EP set get recorded.
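A minimal sketch of that filtering, in Python with invented names (the 1000 figure is the arbitrary threshold mentioned in the footnote above), might be:

    WANT_THRESHOLD = 1000    # the arbitrary figure mentioned above

    def build_wanted_list(unused_counts):
        """unused_counts[ep] = number of recorded UDE/EP sets leaving ep unused.
        EPs unused fewer than WANT_THRESHOLD times are the ones worth chasing."""
        return {ep for ep, n in unused_counts.items() if n < WANT_THRESHOLD}

    def worth_recording(unused_eps, wanted_eps):
        """Applied once before the costly permutations-of-substitutions step and
        again to each set of unused EPs that step produces: only sets touching
        at least one 'wanted' EP go forward and get recorded."""
        return any(ep in wanted_eps for ep in unused_eps)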

This increase in speed, combined with the switch to using random values in unused EPs, meant the program was something like 100 times more productive than it was originally. This means far more UDE/EP sets can be generated thereby increasing the size and diversity of the UDE/EP sets gene pool from which high-unused-EP UDEs might evolve.

Indeed, the limitation became not the speed at which UDE/EP sets could be analysed and generated, but the size of the compiled program: an array of 700,000 256-digit UDEs is about the limit for this laptop's memory. In the light of this newly evident limit, the program needed to be revised. The challenge had become storing large numbers of UDE/EP sets (and also the time it was taking to compare each new UDE/EP set to previously found sets to see whether it is a duplicate, subset or superset of a previous set).

But before that was undertaken, running was restarted by going back to files containing UDEs with as few as 15 unused entry points. The program was run until 15-unused-EPs-UDEs had been generated which between them had all but 4 of the 256 EPs unused.

Looking at the table of which EPs are used by which substitutions reveals that EPs 1, 87, 171 and 256 are used by 15 substitutions. Because these EPs are used so frequently (none of the other 252 is used by more than 8 substitutions), they are very unlikely to ever be unused. Indeed, no UDE with 15 or more unused EPs has been found which has any of these 4 EPs as unused. So, the run was stopped when 15-unused-EPs UDEs had been generated amongst which all of the other 252 EPs were unused at least 40 times.

The program was run again, this time focusing on 16-unused-EPs UDEs until all 252 EPs were unused 40 times by them.

This was repeated for UDEs with 17, 18, 19, 20, 21, then 22 unused EPs and worked well. However, for 23-unused-EP UDEs, despite several long runs, none was found with a further two EPs (170 and 187, making 6 in total) unused. For the 24-unused-EP UDEs, a further 4 (13, 49, 70 and 167) were never unused.

The aim was to produce UDEs which between them had as many as possible unused EPs. In previous runs there were many EPs that were never unused, limiting the gene pool from which to generate UDEs with higher numbers of unused EPs. This would explain why an earlier run stalled at UDEs with 27 unused EPs.

UDEs with n unused EPs tend to be generated from UDEs with at least n-2 unused EPs. In other words, UDEs with 16+ unused EPs will generate UDEs with 18 unused EPs, but UDEs with 15 are much less likely to.

So, to keep the file of UDE/EP sets down to processable numbers (up to 1,000,000), for a run to generate UDEs with 22+ unused EPs, UDE/EP sets with 19 or fewer unused EPs are removed from the file. When 23+ unused EP UDEs which use all 252 EPs have been generated, UDEs with 20 unused EPs are removed from the file. Etc.

As of December 17th 2013, there were 4 UDE/EP sets with 29 unused EPs, 520 with 28 and 23,819 with 27. Despite having thousands of combinations of values tried in their unused EPs, the 29s show no sign of multiplying, nor do the 28s or 27s seem able to produce more 29s.

So a new run was tried starting with a single UDE which had only 1 unused EP. The program only allowed a UDE/EP set to be recorded if it had an unused EP that has been unused fewer than (an arbitrary) 1000 times. The program was then modified to record UDE/EP sets which had at least 2 unused EPs but again only if the set has an unused EP that has been unused fewer than 1000 times by UDE/EP sets which have at least 2 unused EPs. Etc. The aim was to arrive at high unused EP sets (27+) with as wide a spread of unused EPs between them as possible. Since a UDE/n-unused-EP set is almost always generated by sets with at least n-3 unused EPs, when UDEs with 20 unused EPs for example are being sought, UDE/EP sets with fewer than 17 unused EPs are dropped from the file. This keeps the file down to a number that does not slow processing too much.

Tuning the program

After more long runs, the program had produced 929 UDE/EP sets with 29 unused EPs and 66,632 with 28 unused EPs.

To reduce the space needed to store UDEs, the program was modified to hold them in 1-byte INTEGER variables. This (with hindsight all too obvious) improvement meant the program could handle up to 1.7 million UDE/EP sets.

Over 1.4 million UDE/EP sets with 27 unused EPs were found. However, that number of sets slowed processing to such an extent that recording of any more was prevented. The limiting factor had switched back to processing time.

The UDEs with 27, 28 and 29 unused EPs had all but 33 EPs unused between them. The not-unused EPs are all used by 8 or 15 substitutions except Entry Points 10 and 249, which are used by five substitutions.

After very extensive running, when it appeared that no UDE with more than 29 unused EPs would appear, a single UDE with 30 unused EPs was generated. Subsequent running yielded no further UDEs with 30 unused EPs. This could be significant: previously, as soon as one UDE with a given number of unused EPs had been generated, others with that number of unused EPs immediately started to be produced.

Generating UDEs with 30 or more unused EPs is like looking for a needle in a haystack. It seems to depend upon producing a 'magic' UDE with, say, 28 unused EPs which has the ability to generate one with 30. And of course producing that 'magic' UDE with 28 unused EPs relies upon producing one with 27 or 26 unused EPs that can produce it.

The UDE/EP sets that have been generated and stored on the file have been fully milked: they no longer produce new sets with 28 or more unused EPs. So further progress relies upon producing additional sets with 27 unused EPs. However, the program is already pretty much at the limit of the number of UDE/EP sets it can handle and the number of UDE/EP sets with 27 unused EP that could be generated would appear to be enormous: analysing 1000 UDE/EP sets with 27 unused EPs produces over 1000 new ones and the rate of production shows no sign of slowing.

Running was ceased.

February 2014

After a break from the whole thing, a solution to the running-out-of-space problem was found.

Previously, UDE/EP sets were held in memory both as an array of 256 one-digit values and as a 256-character field. In addition, a list of unused EPs was held for each: about 570 bytes in all for each UDE/EP set. Remember, when the program was first written in 2012 it was not known that storage space would be a limitation.

The program was changed to hold each UDE/EP set compressed into 20 four-byte integers, reducing the memory required for each UDE/EP set from 570 to 80 bytes. Slightly more processing is required, but it does not slow the program too much, and it means that several million sets can now be handled, though processing gets progressively slower as numbers increase.
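The packing scheme isn't spelled out above, but one arrangement that gives exactly 20 words is to pack 13 base-4 digits into each 32-bit integer, since 4^13 is comfortably below 2^31. A Python sketch of that assumed scheme:

    def pack_ude(values):
        """Pack 256 base-4 digits into 20 integers, 13 digits per word
        (4**13 fits comfortably in a signed 32-bit integer)."""
        assert len(values) == 256
        words = []
        for i in range(0, 256, 13):
            word = 0
            for v in values[i:i + 13]:
                word = word * 4 + v
            words.append(word)
        return words                    # 20 integers

    def unpack_ude(words, length=256):
        values = []
        for i, word in enumerate(words):
            chunk = min(13, length - 13 * i)
            digits = []
            for _ in range(chunk):
                digits.append(word % 4)
                word //= 4
            values.extend(reversed(digits))
        return values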

The program was then run to look for UDE/EP sets with 27 or more unused EPs. As the gene pool of the 27s was no longer so restricted it would be interesting to see how many 27s would now be generated and whether any of them would generate significant numbers of new sets with 28 or more unused EPs.

There were now 7 with 30 unused EPs, 2,132 with 29, 162,118 with 28 and 3,682,898 with 27. The number of UDEs with 27 unused EPs being produced showed no sign of slowing. Despite the ability to hold and process larger numbers of UDEs, no UDEs with more than 30 unused EPs were found.

A lot of work had gone into the program logic to select UDE/EP sets which between them had as many EPs unused as possible. As a control, the previous version of the program - which did not have the logic for ensuring as many EPs as possible were unused - was run.

And it proved to be just as good at producing UDEs. It seemed that the other improvements to the program, particularly the ability to hold more UDE/EP sets and the ability (through various tweaks) to process more of them, yielded results just as good. So the code trying to ensure as many EPs as possible were unused was abandoned.

But either way, no more interesting UDEs - 29s and 30s - were being produced.

So in February 2014 running was stopped and the program began to fade in the memory.

October 2016 - two and a half years later

An interest in SpaceX led me to a video by a SpaceX guy - an Englishman - talking about array handling in Computational Fluid Dynamics simulations. Getting a computer to figure out how the combusting gases in a rocket engine will behave. What caught my attention was the neat array handling, and I recognised some of the challenges even though my little program was a mere gnat to his mammoth. But it did re-kindle my interest and made me wonder if any further improvement to my program was possible.

It took a day or two to re-familiarise myself with the program - it had been over two and a half years - but a couple of improvements now seemed obvious. One, now incorporated into the program description on the previous page, was to subset sets of unused EPs produced when analysing a single UDE. This reduced the number of such sets from thousands typically to a few dozen (and exceptionally up to 750). This sped processing up noticeably.

The second improvement was straightforward. When a new UDE/EP set is generated it must be compared to those already stored on the UDE file; the program would zip through looking for UDEs with the same number of unused EPs as the new one and see if the new one was a duplicate.

The improvement was, each time the UDE file is read into an array at the beginning of a superloop, to note where in the array UDE/EP sets with each number of unused EPs start, and how many of them there are. The search-for-duplicates DO loop now knows where to start and end. This again sped things up, making one realise that a disproportionate amount of time had been going into this duplicate-search operation, now that so many of the other parts of the program had been tuned up.
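In outline - a Python sketch with invented names, not the actual FORTRAN - the index might be built and used like this:

    def index_by_unused_count(ude_file):
        """ude_file is a list of (packed_ude, unused_eps) with sets having the
        same number of unused EPs stored contiguously.  Returns a dict mapping
        that number to (first_index, how_many), so the duplicate search loops
        over just the relevant slice rather than the whole file."""
        index = {}
        for pos, (_ude, unused_eps) in enumerate(ude_file):
            n = len(unused_eps)
            start, count = index.get(n, (pos, 0))
            index[n] = (start, count + 1)
        return index

    def is_duplicate(new_ude, new_unused, ude_file, index):
        start, count = index.get(len(new_unused), (0, 0))
        return any(ude_file[i][0] == new_ude for i in range(start, start + count))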

These two changes made the program run two or three times faster. That made it realistic to start a run with a single 1-unused-EP UDE and allow up to half a million UDE/EP sets with each number of unused EPs to be generated before sets with the next higher number were looked for, e.g. when 500,000 1s had been produced, only new UDE/EP sets with 2 or more unused EPs would be stored, and so on.

The program was also modified to allow processing from the end of the UDE file: rather than starting each superloop with UDE/EP sets with the most unused EPs, starting at the other end with those with the fewest is now possible.

A further idea was, rather than trying all combinations of substitutions for defining the 'hardcore' undefined functors, how about making a random selection of substitutions? This was a nice idea but proved not to be a good one: the existing program has quite a neat way of examining the chosen substitution for the first hardcore UF and, if it would use up too many unused EPs, not proceeding to look at substitutions for any other hardcore UFs. For example, if the program is doing a run looking for UDEs with at least 8 unused EPs, and the substitution for the first hardcore UF already means there are fewer than 8 unused EPs left, there is no point in looking at the substitutions for the other UFs. This method works really well when there are under 10 of these 'hardcore' UFs. But there can be up to 50 of them in some cases, and 50 nested loops takes forever. The program had started out with 36 nested loops (before it was known that there could be as many as 50 hardcore UFs), but this was cut to 16 and then to 9 to allow the program to run in a sensible time.
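The pruning idea might be sketched like this (a recursive Python stand-in for the program's nested DO loops; the data layout and names are invented):

    def viable_combinations(hardcore_ufs, eps_used_by_sub, used_eps=frozenset(),
                            total_eps=256, min_unused=8):
        """eps_used_by_sub[uf] is a list of sets: the entry points each candidate
        substitution for that UF would use.  A branch is abandoned as soon as the
        EPs used so far leave fewer than min_unused free, so for most contenders
        the deeper substitutions are never examined at all."""
        if total_eps - len(used_eps) < min_unused:
            return                          # prune this branch early
        if not hardcore_ufs:
            yield used_eps                  # a complete, still-viable combination
            return
        first, rest = hardcore_ufs[0], hardcore_ufs[1:]
        for sub_eps in eps_used_by_sub[first]:
            yield from viable_combinations(rest, eps_used_by_sub,
                                           used_eps | sub_eps,
                                           total_eps, min_unused)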

The random substitutions version looked at all the EPs used by all the hardcore UFs and then asked how many were left. This turns out to be much slower than the main version of the program.

Currently, the main version ignores new UDEs that have more than 9 of these pesky hardcore UFs. The random substitutions version of the program is run as a complement, and it ONLY looks at situations where there are more than 10 UFs. No doubt a version that combines the merits of the original version and the random-subs method could be written, but the original version is pretty good as it stands: it has been run on a batch of UDEs, then changed to do up to 16 UFs (rather than 9), and very few if any additional UDE/EP sets were produced. So the random substitutions idea was of only marginal benefit.

Another idea was to sort the new UDE/EP sets as they are generated, by constantly shuffling the array in which they are stored. The new UDEs would be sorted by the first of the 20 elements of the UDE, within number of unused EPs. This makes checking whether the latest UDE/EP set generated is the same as any previously generated in this superloop much faster. It also allows the main UDE file to be sorted by first element within number of unused EPs, making the hunt for duplicates against old UDEs very much quicker. Unfortunately the time spent shuffling the array far outweighed the saving, so the idea was abandoned.

However, it prompted another thought: rather than sort new UDEs as they were generated, how about sorting them at the end of the superloop, just before the subsetting against old UDE/EP sets? A sorted main UDE file makes seeing whether a new UDE/EP set is a duplicate of one on the file much faster: there is only a narrow block of old UDE/EP sets - with matching number of unused EPs and the same first UDE element - to be compared against. This improvement was adopted. It alone makes the program run about 3 times faster overall, and has the most effect when there is a large number of UDE/EP sets with a given number of unused EPs. Comparing a new UDE/EP set against a million old ones with the same number of unused EPs was a nightmare; now only around 15% of that million have to be looked at. Clearly, fully sorting the UDEs could cut this down further, and that might be one idea to try out at some future date.
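A sketch of the sorted duplicate check, assuming each set is keyed by (number of unused EPs, first packed element, full packed UDE) - again illustrative Python rather than the program's FORTRAN:

    from bisect import bisect_left

    def is_known(new_set, old_sorted):
        """old_sorted holds (num_unused_eps, first_element, packed_ude) tuples
        kept sorted on the first two fields.  Only the narrow block sharing the
        new set's number of unused EPs and first packed element is compared in
        full, instead of every old set with that number of unused EPs."""
        n_unused, first_elem, packed = new_set
        i = bisect_left(old_sorted, (n_unused, first_elem))
        while i < len(old_sorted) and old_sorted[i][:2] == (n_unused, first_elem):
            if old_sorted[i][2] == packed:
                return True
            i += 1
        return False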

Looking back to 2012 and the very first version of the program, it took several weeks to generate 100,000 UDE/EP sets and to get from a 1 unused EP UDE to UDEs with 20+.

The current (November 2016) version of the program can accomplish the same things in about 15 minutes. So the current version is getting on for 4 orders of magnitude faster than the original version. It is now possible to contemplate a run starting with one 1 unused EP UDE, generate a million UDE/EP sets with each number of unused EPs and see what happens at the top of the food chain. That would still take several days but would have taken months previously - and consequently would not have been attempted.

Conclusion and Further Enhancements

The program can handle up to 4, maybe 5 million UDE/EP sets. Typically in a run, when all the 1s have been analysed they are dropped from the file, and so on, such that when we get to the 26s only 26s, 25s and 24s will be held on the file.

Back in 1972, memory on the IBM 1130 was so constrained that UDEs were held very economically. Something like every 1000th UDE was held in full, and for succeeding ones only "what's different" data was held. (Anyway, something like that, if memory serves me correctly.) Very efficient space-wise, since often only one or two of the 256 values differ between successive UDEs. So that could be one interesting avenue to explore.
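The kind of delta storage being described might look like this in outline (purely illustrative Python; certainly not the 1130 code, whose details are only half-remembered anyway):

    def delta_encode(udes, keyframe_every=1000):
        """Hold every keyframe_every-th UDE in full and, for the rest, only the
        (position, new_value) pairs where it differs from its predecessor.
        Pays off when successive UDEs differ in just one or two of 256 entries."""
        encoded, prev = [], None
        for i, ude in enumerate(udes):
            if prev is None or i % keyframe_every == 0:
                encoded.append(("full", list(ude)))
            else:
                diffs = [(p, new) for p, (old, new) in enumerate(zip(prev, ude))
                         if old != new]
                encoded.append(("diff", diffs))
            prev = ude
        return encoded

    def delta_decode(encoded):
        udes, current = [], None
        for kind, payload in encoded:
            if kind == "full":
                current = list(payload)
            else:
                current = list(current)
                for p, new in payload:
                    current[p] = new
            udes.append(current)
        return udes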

At the moment, when generating UDE/EP sets with 26 or fewer unused EPs, the number of sets grows faster than they can be analysed. As for the 27s, there might be a manageable number - a run to find out is contemplated. When it comes to 28s, their number is definitely relatively small - try as it may, the program runs out of puff when it's found fewer than 200,000 of them. Only 2176 UDE/EP sets with 29 unused EPs have been found. And, despite thrashing the 27s, 28s, 29s and the 30s themselves to within an inch of their lives, only 7 UDE/EP sets with 30 unused EPs have been found.

So far no limit seems to be in sight for the number of UDE/EP sets with 27 unused EPs. However, the number of UDE/EP sets with 28 unused EPs seems to be more limited. It looks like there may only be a few thousand with 29 unused EPs, and a handful of 30s.

The goal has now become to see if it is possible to produce a UDE with 31 unused EPs. That is the ultimate aim of all production runs now.

So, for the time being, it looks as if the number of first order universal decision elements of four argument places in four valued logic may be of the order of 4^30, or about 10^18 - a quintillion.

Final Enhancements

November 17th 2016.

Sorting UDEs on their first element worked well until a block of over 100,000 with the same first element value was produced, making the hunt for duplicates against udefile take about 20 minutes. So new UDEs (and thus the whole udefile) are now sorted on the first nine (of their 20) elements. The time taken to check whether 30,000 new UDEs are duplicates of ones on udefile has fallen from 20 minutes to 1 second - quite an improvement. The sort takes a minute or so, but is an excellent investment.

Another major change is to the combinations-of-substitutions logic. Previously only UDEs with 9 or fewer hardcore UFs were processed. Now, because the program runs so much faster, this has been expanded to 14. Furthermore, UDEs with more than 14 hardcore UFs are no longer ignored: a number of random selections of substitutions for UFs 15 and upwards are tried, and within each of those random substitutions for UFs 15+, all combinations of substitutions for UFs 1 to 14 are processed. So if, for example, we choose to try 3 random combinations, the heart of the program will take 3 times longer than it used to. However, not all UDEs have more than 14 hardcore UFs, and the program now otherwise runs so much faster that this enhancement is not a problem performance-wise.
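Roughly, the generation of combinations now works something like this (a Python sketch with invented names; full_limit and n_random stand in for the 14-UF limit and the handful of random selections described above):

    import itertools, random

    def candidate_substitution_sets(subs_per_uf, full_limit=14, n_random=3):
        """subs_per_uf is one list of candidate substitutions per hardcore UF.
        Every combination is tried for the first full_limit UFs; for any UFs
        beyond that, a handful of random picks stand in for the full search."""
        head_part = subs_per_uf[:full_limit]
        tail_part = subs_per_uf[full_limit:]
        for _ in range(n_random if tail_part else 1):
            tail = [random.choice(subs) for subs in tail_part]
            for head in itertools.product(*head_part):
                yield list(head) + tail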

Latest Results

As of November 25th 2016, the program has found UDEs with 32 unused entry points. (but see below)

There are     1,040 UDEs with 32 unused entry points.
There are    63,156 UDEs with 31 unused entry points.
There are 2,424,725 UDEs with 30 unused entry points.

Several long runs have been done on these UDEs in an attempt to produce more. Each run now produces very few new ones, and no additional 32s have been produced in any of the most recent runs.

Given the number of 32s and 31s it is extraordinary that no 33s have been produced. Whilst one cannot definitively say there are no 33s or above, it is tempting to draw that conclusion.

The program now goes at such a rate (maybe 100,000 times faster than the 2012 version) that it has become a joy to behold. The program can run with the array size at 12,000,000 - a number unthinkable until very recently.

We can definitely conclude that there are at least 4^32 first order universal decision elements of four argument places in four valued logic, and we may tentatively suggest there may be no UDEs with more than 32 unused entry points.

A run is underway that will generate 8 million 1s, then start to generate 2s and above till there are 8 million 2s, then 3s and above, etc, with the aim of having a wealth of genetic material when the high end is reached. If this run does not improve upon the 1040 32s, that could be significant and may suggest 32s could indeed be the limit.

December 9th 2016. The above run at first seemed to have failed: it produced only 148 32s despite having 15 million 29s as genetic sources - 15 million being the most that can be accommodated. The run was almost abandoned. But in a last throw of the dice, the 29s were deleted and the program was allowed to generate fresh 29s, along with 30s, 31s and 32s. It quickly generated millions more 29s, and the 29s were capped at 6,000,000 with the intention of milking them and then ending the run.

But overnight the program delivered a surprise: six 33s. And a few hours later the 32s - of which there had previously been 1040 - suddenly started to multiply.

We can definitely conclude that there are at least 4^33 first order universal decision elements of four argument places in four valued logic. And experience has shown that every time it seems the pinnacle has been reached it is later surpassed as the program becomes more capable. So one is reluctant to conclude, even tentatively, that there are no further summits yet to be reached.

As a final experiment a new run was started, with just one UDE with 1 unused EP. Rather than starting at the end of the file - i.e. always processing the UDEs with the fewest unused EPs - this was a start-at-the-start-of-file run. The subsetter was turned off, just to make the experiment go faster. Within 3 hours the run had produced 33s - and over 200 of them at that. The run achieved in 3 hours what the previous run had taken 10 days to do. At least it is becoming clear how to set up a run to get top-end UDEs in a sensible amount of time. The subsetter was then run to eliminate subsets amongst the top-end UDEs.

Running has now ceased. The final number of top-end UDEs and conclusions are as follows:

There are at least    228 UDEs with 33 unused entry points.
There are at least 10,658 UDEs with 32 unused entry points.
There are more than 4^33 first order universal decision elements of four argument places in four valued logic.

October 2018. It's been nearly two years since the program had an outing and the bug has bitten again, so it's getting a run. A new idea seemed to make it worthwhile: generate, say, 6 million UDEs with one unused entry point (1s), then only let the program generate 2s, and when there are 6 million of those, generate 3s (but no higher), etc. The idea being to preserve the DNA of all 6 million 1s.

However, when generating 2s, millions were generated that would not have been stored if 3s or higher were being recorded, because they would have been subsets. So huge numbers were (unnecessarily) being recorded: anything they could generate could be generated from their supersets.

So a modified approach was adopted: 15 million 1s were generated and, once they had been, only UDEs with at least 2 more unused EPs than the UDE being analysed were recorded - i.e. when 1s were being analysed, 3s and above were recorded. This cut down the absolute number of UDEs. Amazingly, a 1-unused-EP UDE managed to generate a UDE with 15 unused EPs, and millions of 5s, 6s and 7s were generated from 1s. Presumably, these 1s capable of generating higher-EP UDEs are the ones with the most fecund genes.

In an alternative run, only UDEs with at least 4 more unused EPs than the UDE being analysed were recorded: e.g. 5s and above when 1s were being analysed. This was a complete failure: the run stopped when all UDEs had been processed (5 randoms) and nothing higher than two 16s had been produced! Perhaps higher-EP UDEs just don't produce new ones with 4 more unused EPs. Unfortunately the run was set up to delete processed UDEs, so it can't be restarted to see if those 16s would produce 17s, though it's certain they would have.

Two runs were made starting with the same file of 1,000,000 1-unused-EP UDEs, with 4 randoms on the 1s and only one random all the way up to 27, where more were used. Interestingly, both runs produced UDEs with 30+ unused EPs (though no 33s), yet the two files contained no duplicates!

Final Final Enhancements

October and November 2018.

I'm not sure what prompted renewed interest, but I thought I'd give it another go. The program seemed dauntingly complicated and I wasn't sure I'd get to grips with it again - but it soon comes back.

Several improvements were made to make runs more controllable (such as setting the maximum number of UDEs with each number of unused EPs that may be generated in a run) and half a dozen performance improvements were made: the November 2018 edition runs two or three times faster than the 2016 version.

Better performance, and better control of runs, means more formulae can be crunched. Generating millions of UDEs with one unused EP and taking them in batches to see what they can do proved fruitful. And one day a 34 appeared. But after many, many runs this appears to be the ceiling. Practically unlimited numbers of 31s can be generated, but the numbers of 32s and above seem to be limited.
The numbers produced are:
34s          96
33s      12,275
32s     900,470

So, there are more than 4^34 first order universal decision elements of four argument places in 4-valued logic.

The Future


Universal Decision Elements In Four Valued Logic

Webpages written: May 9th 2012 - December 2018 (on and off)

Copyright M Harding Roberts 2012 - 2016