July 14, 2020
This is the next level entirely. The taste is amazing, with the cheese adding just the right crunch to the overall soft texture of both the pasta and the cauliflower. It feels much lighter than Brie. Usually, everything tasty must be well-fried, for the fats to scorch and provide the taste, but this is cheating.
June 16, 2020
Although the baconbon is an amusing hack, it is a very low-brow, bachelor-level cooking exercise. A step up from it is baked brie. The taste is much better integrated.
June 15, 2020
Although never mentioned on my resume, I left MCST a little too early for my ticket to the U.S.A., and so I spent a very short time working at a company that was building a fault-tolerant application under FreeBSD. The idea was to use syscalls as indicators of the state, so that when 2 computers executed the same program, the system compared the syscall state and raised an alarm if a difference was found. The idea had been implemented earlier by e.g. Stratus, but with 2 computers completely in lock-step, at the clock level. A software layer on top of FreeBSD was a poverty-spec implementation of the same idea. The system was supposed to support Yeltsin's nuclear suitcase, BTW.
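The comparison scheme can be sketched in a few lines (this is my own toy illustration, not the actual product's code; the record format is made up):

```python
# Toy sketch of syscall-level lockstep checking: two replicas report
# their syscall streams, and a comparator flags the first divergence.
# The ("name", argument) tuple format is invented for illustration.
def first_divergence(a, b):
    """Return the index of the first differing syscall record, or None."""
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return i
    if len(a) != len(b):
        return min(len(a), len(b))  # one replica fell behind
    return None

primary   = [("open", "/cfg"), ("read", 128), ("close", 3)]
secondary = [("open", "/cfg"), ("read", 64),  ("close", 3)]
alarm_at = first_divergence(primary, secondary)
```

In the real system the hard part is everything this sketch omits: capturing the streams at the kernel boundary, and deciding which syscall results are allowed to differ (clocks, PIDs) without raising a false alarm.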
A leading engineer on the project was Alexey "saw" Savochkin. He later quit programming, went to Italy, and got a Ph.D in some unrelated field, like History.
When I turned to emigration, I tried some random things at first, like applying to companies like Sun and SCO (it was long before Darl McBride's time). I even received a couple of confirmation postcards. That wasn't going anywhere, so I contacted one Mike Sheiner, who ran a team at Pads (now a division of Mentor) and had placed an acquaintance from Butenko's circle. He replied to the tune of, "I am sure that you are a competent programmer, but to work at my company you must be an outstanding programmer".
At one point, I came upon a gentleman who was assembling a team of contract engineers to live in a dorm in NYC. I knew it was shady, but I was sufficiently desperate to interview. His technology was called MUMPS, which could be described as an illegitimate child of COBOL and ADABAS. The interview question was to produce a formula to determine whether a given month had 30 or 31 days (with the exception of February).
It was immediately obvious what he wanted to see. Imagine that you make a graph of day of the year for the ending of each month. These days pretty much all fall on a line, with 30-day months being a little under the average, and 31-day months a little over. All I needed was to identify the correct coefficients for the linear graph, then find how to construct an expression in MUMPS that produces 1 for negative numbers and 0 for positive ones, using some modulo and rounding trickery.
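One formula of that general shape (my reconstruction in Python, not the original MUMPS, and not necessarily the coefficients he had in mind):

```python
def month_length(m):
    """Days in month m (1..12, February excluded), by arithmetic alone.

    Months 1..7 alternate 31/30 starting at 31; from month 8 the
    alternation flips, which the m // 8 term accounts for.
    """
    return 30 + (m + m // 8) % 2

# Every month except February, in order.
lengths = [month_length(m) for m in range(1, 13) if m != 2]
```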
I think I passed that hurdle, but fortunately nothing came out of the project.
Next: Memoir 11.
June 10, 2020
The origin of my interest in open source is somewhat murky at this point. I only remember that I started looking at it early in my MCST tenure, when I bought my first PC, around 1992. At the time, BSDi was still a thing, as well as FreeBSD, NetBSD, OpenBSD, and a fairly embryonic Linux. I asked Vadim "~avg" Antonov which BSD was the best. His answer was "they all suck". He meant the infighting and splintering, not any technical issues, but it was enough for me to turn my attention to Linux.
Linux was still not even close to BSD at the time, in features or performance. But people flocked to it thanks to its GPL license and the free-for-all development. Moscow's foremost Linux man was Eugene "Crosser" Cherkashin. He created ifgate, a gateway between Usenet/e-mail and FIDO, and ran it out of his apartment. He shared SLS 1.02 with me. I think it had a kernel 0.99_pl12 or something equally crazy. Booting off kernel and root floppies, it installed by untar-ing the rest into a hard drive partition.
Since I worked in a Sun shop, it took me no time to start looking into porting Linux to SPARC. David S. Miller was leading that effort, and I helped along a little. When DaveM moved on to support sparc64, he let me maintain the good old sparc(32), which was pretty fun. I did it until after I joined Red Hat in 2001.
While under DaveM, I committed the same error Mike Y. did with the over-designed emulator. SPARCstation didn't have a text mode display, so the Linux console had to be rendered in software. To that end, I wrote a renderer of which I was rather proud. It allowed fonts of any width, such as 9 pixels. I think it was in the tree for a little while, but then DaveM replaced it with the console developed by Geert Uytterhoeven for the Amiga. It was fractionally faster perhaps, but most importantly it was much simpler.
In 1995, I traveled to Helsinki in order to meet Linus. It was -15C and he arrived at our appointment wearing ear protectors. We chatted a little and visited his office, where he got into direct contact with DaveM over talk(1), running at vger.rutgers.edu. It was a good time.
Linus disclosed to me in confidence that he was looking for a job in the U.S. That really set my mind on the topic.
In these few short years, Linux gained SMP and networking, and became a respectable OS. As it happened, MCST tried to sell domestic SPARC to the Russian military, and the question of an OS came up. I proposed to use Linux, since among other things it was free as in beer. But our director, A.K. Kim, rejected my proposal on the grounds that he would be laughed out of top generals' offices if he proposed an OS written by a Finnish student for Very Serious Defense Business. MCST went with buying a source license for Solaris for something like $200,000. Of course, soon after my departure they ended up switching to Linux anyway. I was merely ahead of the time with it.
 One of the most dramatic examples of over-design that went wrong is Subversion. As Bram puts it:
The simplest architectural problems to solve are the ones which for lack of a better theory most people ascribe to emotional or psychological problems. These are decisions for which there's no rational justification whatsoever. For example, writing a non-speed-critical program (which is most of them) in C or C++. A few years ago you could justify that because the other languages didn't have such extensive libraries, but today it's ludicrous. Another one is building one's protocol as a layer on top of webdav. And another one is building a transactional system for retrieving any subsection of any point in the history of an arbitrarily large file in constant time when that isn't part of project requirements. Yes, I'm making fun of subversion here. It's a great example of a project permanently crippled by dumb architectural decisions.
As an interesting footnote, Greg Stein got a job at Google on the back of that debacle. Google even declared Subversion a standard source control for a short time.
Next: Memoir 10.
June 09, 2020
I tasted the power of emulators for the first time in 1986. At the time, a possibility of porting MISS to PDP-11 was floated, but our group did not have the hardware. So, I started working on an emulator. It ran very slowly, taking about 200 host instructions per 1 target instruction. So, the late Mikhail Flerov taught me how to use a profiler, and the main offender turned out to be the virtual memory facility. But I'm not talking about an MMU here. Both the target PDP-11 and the Mitra-15/ES-1010 hosts were 16-bit systems, so it wasn't possible for the emulator to just allocate a large array. Instead, a layer was used that paged parts of the PDP-11 memory image. By profiling and some re-architecting, I managed to speed up the emulator so it was only 30-50 times slower than the real system. I also found paper tapes of CPU and RAM tests for PDP-11, and used those to catch bugs. The source of the emulator survived, although it was written in a long-forgotten language.
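The paging layer itself can be sketched like so (a minimal Python sketch of the idea, nothing like the lost original; the page size is illustrative):

```python
PAGE_SIZE = 256  # illustrative; I no longer remember the real page size

class PagedMemory:
    """The 64 KB target image held as on-demand pages, so the 16-bit host
    never has to allocate the whole image as one large array."""

    def __init__(self):
        self.pages = {}  # page number -> bytearray, faulted in on first touch

    def _page(self, addr):
        n = addr // PAGE_SIZE
        if n not in self.pages:
            self.pages[n] = bytearray(PAGE_SIZE)
        return self.pages[n]

    def read(self, addr):
        return self._page(addr)[addr % PAGE_SIZE]

    def write(self, addr, value):
        self._page(addr)[addr % PAGE_SIZE] = value & 0xFF

mem = PagedMemory()
mem.write(0o1000, 0o42)
```

Every memory access goes through the `_page` lookup, which is exactly the kind of per-instruction overhead a profiler would flag.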
In 1992, the hyper-inflation turned my university salary into zero, so I left the glories of do-it-alone systems programming behind and found a job at a company known as "Moscow Center for SPARC Technologies", or MCST. It was based at a storied establishment, the Institute of Precision Mechanics and Computer Equipment (IPMCE), and basically employed the scientists and engineers of the institute, who would otherwise go as hungry as I did.
A large chunk of MCST's business was to act as a research arm of Sun Microsystems, under the umbrella of SunLabs. IPMCE was known for making CPUs, so that's what we did for Sun.
My first real task was to help Michael Yaroslavtsev with a gate-level CPU emulator. Rather than just throwing together a few C++ classes, Mike developed a framework first. It included a domain-specific language, which described the CPU blocks according to the design of the hardware, and an interpreter with pre-translation for the DSL.
The whole thing ran on a SPARCstation 1+ and basically was too slow to be useful for anything. Hardware engineers and architects were not happy with our efforts, even though they were the ones who asked for a clock-precise sim.
After the failure of that effort, I ended up in a competing group, led by V. Tikhorsky and his right hand, Boris Fomichev (who later became known for STLport.org). The dynamic duo's emulator was roughly clock-precise, although its internal design in no way matched the CPU that it emulated. As a result, it was significantly faster, almost fast enough to boot an OS. They implemented what their customers really wanted, instead of what they said they wanted.
In 1996 I partook in yet another emulation effort for MCST, which probably was the coolest of them all. I didn't write the emulator though. The emulator was an actual Verilog runner, executing a real CPU model, which could otherwise be routed and taped out. The environment allowed certain hooks into C, which could drive or report CPU pins. So, a guy on the CPU design team wrote those hooks down to ioctl(2), and I wrote a driver for Solaris that managed a bi-directional parallel port on the SPARCstation 10 (bpp). In order to do that, I reverse-engineered the bpp driver of Solaris first. A little FPGA board adapted the CPU socket to the parallel port with some buffers and latches. Then, we booted Solaris on the Verilog model, but using the actual hardware for all the DRAM and peripherals. The technique of "in-circuit emulation" wasn't particularly groundbreaking, but it was impressive nonetheless. I touted that driver in my resume for years afterward.
 Flerov was a colorful character. I still have his whitepaper about a new OS to replace both MISS and UNIX. Back then it was seen as a perhaps ambitious, but not downright crazy, idea. He killed himself in 1988.
Next: Memoir 9.
June 05, 2020
By about 1989 it became abundantly clear that MISS needed a packet switching network, and it provided me with an opportunity to make the biggest misstep in my career.
MISS already had a store-and-forward network. As a great example of convergent development, it was surprisingly similar to IBM RSCS, the backbone of BITNET. When Butenko implemented a gateway to RSCS, the biggest issue was that MISS had 10-character identifiers and RSCS had 8-character ones. My username, ZAITCEV, was one of the lucky ones that could be used as-is. But either way, it clearly was a dead-end.
Still, this gatewaying experience clearly demonstrated that developing a MISS-specific network was a non-starter. We needed to adopt someone else's network. But which one?
Butenko himself got deeply involved in hacking on the Apple Mac at the time, and Macs came with AppleTalk. It was a basic network with a rudimentary provision for routing, and it also featured Apple's equivalent of TCP, ADSP. It supported sharing of files and printers, but no virtual terminal. It ran over a serial-port-driven bus and over Ethernet.
Other competitors included TCP/IP, Novell IPX, NERPA, and DECnet. I had an opportunity to look into NERPA because Butenko got acquainted with N.V. Makarov-Zemlyanski, and we managed to get connected. NERPA was inspired by ARPAnet, but only implemented the network itself, without an internet. It provided file transfer and a remote terminal. It ran over point-to-point lines of course, which connected hosts and concentrators.
DECnet mostly imitated X.25 and other WAN networks, although it managed to support Ethernet. It was not particularly well documented, and I didn't have access to a reference implementation: the university had no VAX.
Novell's product was extremely popular at the time, so it was easy to find. However, it didn't seem well featured. It lacked the remote terminal that we needed. The documentation was somewhat vague. The main advantage of IPX was that it supported ARCnet, which was the only real LAN that I had available. Ethernet was much too expensive for me.
Microsoft's offerings, LAN Manager and NetBIOS, were so bad that I rejected them early on.
When I started investigating TCP/IP, I was somewhat overwhelmed by its scope and features. It was very obviously a better idea than the X.25 garbage, but even so it was a large suite and I sensed that I would not be able to implement it in any reasonable time frame. Also, TCP was the backbone of the system. Having just implemented the uucp g-protocol, I was apprehensive of an internet-capable virtual circuit protocol. But naked IP was almost completely useless.
In the end, I selected AppleTalk.
My thought process went along the lines of not needing ADSP at first (it was only required for folder sharing), the availability of quality documentation, as well as a reference implementation. My first medium was ARCnet, for which I borrowed the RFC-1201 framing, only with a new protocol ID number (later, I tried to reserve that number with a standards body). I also borrowed SLIP to transmit AppleTalk datagrams to the ES-1011. Although my AppleTalk island had no real Apple in it, I looked into PC-compatible ISA cards with the Zilog 8530 USART. They existed to connect PCs to Macs, so I had a hope for interoperability. What can I say, AppleTalk seemed like a good idea at the time.
Aside from betting on the wrong horse to begin with, my second biggest mistake was not realizing that the OS-level API was critically important for networking. Had I gone with TCP/IP, I would have learned the role of sockets, but I didn't. As a result, MISS never gained any network API, and applications talking to the network relied on kludges that worked through an equivalent of ioctl(2).
 At about the same time, Makarov-Zemlyanski implemented the MISS network in BESM-6 assembler. It permitted users of OS Dispak to exchange Internet e-mail globally through the ES-1011 and my e-mail gateway.
 The name NERPA itself was a pun on Not-ARPA, although the nerpa is also a kind of freshwater seal, endemic to Lake Baikal.
Next: Memoir 8.
June 04, 2020
While a university student, I held a side job at a small company "Micros" that developed a COM-port network. It was a somewhat common thing to do, not much different from the original AppleTalk in concept.
Although AppleTalk was a bus with a CSMA/CA protocol, Micros was a token ring. A station plugged into the ring through a tap with a relay driven by the DTR signal. Thus, if the COM port was not open or the PC was turned off, the station was electrically disconnected from the network. Not a surprising design for the years when people thought FDDI was a good idea.
Most of the software for the network was written in Modula-2 and stayed resident in MS-DOS, providing remote access to files. It was in the days when a PC AT with a 12 MHz CPU and 1 MB of RAM was the gold standard.
Come to think of it, I cannot remember what I ever accomplished at that place. I worked hard and was paid relatively well, but I just didn't do anything noteworthy. However, I met some interesting personalities.
The owner of the company, Mr. Andrey (?) Kinash, graduated from LesTech, or the Forest Technology Institute, a narrowly famous university that was divided in half between forestry and the space and rocket sciences. Running a company during the final days of the USSR wasn't a trivial undertaking.
Our team lead was my friend Anatoly Voronkov, who wrote most of the software. Other members were myself, Marat Shafigulin, and Vladimir Roganov. Vladimir wrote a rather interesting error correction code for us. At the speed of 115,200 baud, bytes arrive quickly enough that the unbuffered UART in the PC dropped them when MS-DOS disabled CPU interrupts. Every time the 55ms timer hit, we lost something like 3 bytes. The challenge, thus, was to develop an erasure code that could recover not from corrupted bytes, but from lost ones. Although, I think, our low-level framing provided some idea of how long the packet should be, the code could not know the position of the lost bytes. IIRC, Vladimir's code expanded the data stream by less than 10%.
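Vladimir's code is lost, but the error-vs-erasure distinction is easy to illustrate with the simplest erasure code there is: a single XOR parity byte, which restores one lost byte provided its position is known. (His code was far stronger, handling multiple losses whose positions were only roughly known.)

```python
def add_parity(data):
    """Append one parity byte: the XOR of all data bytes."""
    p = 0
    for b in data:
        p ^= b
    return bytes(data) + bytes([p])

def recover(frame, lost_at):
    """Reconstruct the byte at index lost_at: the XOR of every other
    byte in the frame (parity included) equals the missing byte."""
    p = 0
    for i, b in enumerate(frame):
        if i != lost_at:
            p ^= b
    return p

frame = add_parity(b"token ring data")
```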
All of that was rather nifty, but of course the arrival of cheap Ethernet made such networks obsolete.
One of the firm's locations was a basement that we called "Kinashevnik", for obvious reasons. It had rats and we generally got to know them, because everyone loved to work overnight. That was when I learned that a rat can gnaw through almost anything, including concrete. The only thing that could stop it was a metal plate. Anatoly then had a brilliant insight that we ought to plug rat holes not with something sturdy, but with something that was not in rats' taste. We found a type of packaging foam that apparently tasted disgusting enough to rats that they would not gnaw through it, and used it to plug rat tunnels.
Next: Memoir 7.
June 02, 2020
In a comment at Brickmuppet's blog, I brought up a morbidly funny case of someone getting ejected from a PiperSport aka SportCruiser. The NTSB speculated that he wanted an item from the hat rack, unbuckled his main belt, reached for it, and before he fastened the belt back, something in his harness snagged the canopy latch. As the canopy popped open, he applied a nose-down control input and got thrown out of the airplane, landing 1/3 of a mile from the main wreckage.
Seems like the Czechs cannot make a foolproof enough latch. However, someone else crashed a SportCruiser because the canopy was or became unlatched, then he unbuckled in order to reach the handle and re-latch it. This is a big no-no. Many people have died under similar circumstances, in various types of airplanes. In one particularly gruesome case, in 1988 a woman crashed a Bonanza into a Phoenix backyard where a family was having a barbecue, killing a 10-year-old girl and 2 adults on the ground.
I had a door pop on me 3 times.
First time, I likely forgot to close it properly in an Arrow, so the upper latch didn't grab. I violated one of the cardinal rules by taking off without having the landing assured — the cross-wind was too strong. I ended up flying to another airport that had a runway better aligned with the wind. The draft inside was as strong as everyone was saying.
Second time, I am pretty sure the door was closed, but it popped open on takeoff somehow. It was during a checkout in a Skyhawk at a large commercial airport with long runways. I aborted the take-off and stopped. The check-out instructor agreed with me.
Third time, vibration made the handle turn in my Carlson, and the door popped open when I slipped for landing. The hinges were on top, so it made a tremendous slam against the wing, and I was certain it tore through the fabric. But no damage was found. I later modified the latch's claw for a more positive engagement.
Honestly I cannot see why people would try to close the door in flight. It's just a dumb way to die.
May 23, 2020
In 1983 my father acquired a computer for his lab, a clone of an 18-bit address PDP-11, called SM-4. At the outset, it had a ferrite core RAM of about 32KB, which clearly was inadequate. In a couple of years, he got a semiconductor RAM module of 128KB. It would have been a great improvement, but it was unreliable. It was based on the KR565RU1 DRAM chip of rather poor quality. As the system ran, the DRAM warmed up and started to fail.
I helped Dad's underlings to fix it up. We would put a board on an extension so it hung outside of the chassis. Then we ran diagnostic tests. When the test failed, we administered just a drop of ethanol onto the DRAM chips, one at a time, to cool them down. If the test started to pass again, we had found the bad chip. I replaced a bunch of those by carefully cutting the legs off, then removing the legs one by one. I did not have a solder pump at the time, and it was safer to cut the bad chips off anyway. It was important not to damage the PCB.
IIRC, about 30% of the chips were bad. Of course, the same ratio among the spares was bad too. But eventually we prevailed and got the RAM module running well. It served until the SM-4 was decommissioned.
Next: Memoir 6.
May 21, 2020
For a few short years, the backbone of computer networking in the late USSR was Internet e-mail transmitted over UUCP. To hook MISS into it, I needed the so-called "g protocol". It provided basic flow control over a modem link with sliding windows. The counters were in packets, not bytes like in TCP, and up to 8 packets could be outstanding at a time before being acked.
To be frank, it was hell on Earth. Or almost, because it was fun after all, like climbing Everest.
To begin with, merely programming a sliding window protocol challenged my prowess. There are a million corner cases, all the ack numbers had to be tracked, and the modulo-8 arithmetic comparisons were confusing. I struggled with basic correctness.
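For the record, the confusing part boils down to one helper (a sketch of the arithmetic only, not my lost implementation):

```python
SEQ_MOD = 8  # the g-protocol carries 3-bit sequence numbers

def in_window(seq, base, size):
    """True if seq falls within the window [base, base+size) modulo 8.

    Subtracting the window base before reducing modulo 8 is the trick:
    it makes the comparison immune to the numbers wrapping past 7.
    """
    return (seq - base) % SEQ_MOD < size
```

With base 6 and a window of 4, the in-flight packets are 6, 7, 0, 1, even though 0 and 1 are numerically smaller than the base.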
But the hell arrived when I faced interoperability. Our first counterpart at Demos was a Xenix of some kind. Its uucp descended from the earliest AT&T originals. I more or less made my implementation talk with that. But one day, Demos switched us over to a VAX with BSD 4.1. That used an entirely different implementation, known as "pk". My code flat out didn't work with it, and it took a huge effort to fix it up. The final implementation that I had to support was UUPC, an MS-DOS program. By the time I had to talk to it, my code was tightened up considerably and UUPC only took some slight tweaks.
The amount of work it took to get the g-protocol robust has framed my understanding of communication protocols for many years. It continues to amaze me that TCP works at all, considering the diversity of networks that it must traverse, and above all the multitude of implementations. TCP is significantly more complex than the g-protocol, before we even talk about auxiliaries such as PMTU Discovery or extensions such as SACK. I didn't have to deal with sizing of the window and the Slow Start.
Unfortunately, the source code of my g-protocol implementation was lost when I changed jobs to MCST and didn't think to take it along.
Update: Michael Y. prompted me to date the events above. I happened to keep a printout of an e-mail, sent to avg at hq.demos.su in August 1990. That is when the g-protocol in MISS was complete enough that I started to look at RFC-822 and wanted a sample. But the work on g-protocol started back when the .su TLD did not yet exist. I distinctly remember seeing e-mails addressed to ussr.com. So, late 1989, I suppose.
 Aside from the g-protocol itself, I needed a few other things: a Hayes-compatible modem, and an ability to control it. It was not a trivial task to find hard currency in order to buy the 1200 baud modem, while working at a university in a collapsing country. In addition, the Mitra-225 cum ES-1011, as well as MISS, had some trouble talking to the modem. The "channel" was barely capable of supporting full-duplex operation at 1200 baud with linked channel programs. Interrupts were expensive! The OS did not have any way to buffer input either. I ended up writing a kernel driver that de-framed the g-protocol and buffered 1 or 2 packets at a time. This allowed the userland to receive packets with MISS' equivalent of ioctl(2). All this was small potatoes, however.
 It was the famous kremvax.demos.su. My own node was r11740.phys.msu.su. R11 was yet another alias of Videoton's clone of Mitra, and 740 was the serial number of the CPU.
Next: Memoir 5.
May 20, 2020
My LSI-11 needed a console, attached over a current loop. I found a small Polish terminal that was suitable. Unfortunately, it was set to display the KOI-7 encoding, and UNIX needed ASCII with lowercase letters. So, I set out to re-program it. I opened the case, and the first surprise was that the case was made out of plywood. It was polished and painted from the outside, so I never suspected anything!
The firmware and the fonts were stored in a UV PROM. It was the kind of ROM that you needed to erase with ultraviolet light. Some people even erased them with direct sunlight. Through my connections, I found someone who had a programmer for it on the other side of town. I read the ROM, found where the fonts were, and copied them out. Then, I designed lowercase letters. I mapped them in a text file with asterisks, and then read that file with a simple program in C. The complete font is preserved in this picture (in an X bitmap, actually):
Once I packed the font back into the firmware, I went to the gentleman with the programmer and wrote it into the chip. The operation was a success, although it ultimately led to nothing, even though the font had a last hurrah later.
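The asterisk-map trick translates into a few lines in any language (the original was a simple C program; this Python equivalent is my reconstruction, with a made-up glyph):

```python
def rows_to_bytes(rows, width=8):
    """Pack one glyph, drawn as rows of '*' and spaces, into font bytes:
    one byte per row, leftmost pixel in the most significant bit."""
    out = []
    for row in rows:
        b = 0
        for i in range(width):
            bit = 1 if i < len(row) and row[i] == '*' else 0
            b = (b << 1) | bit
        out.append(b)
    return bytes(out)

# A made-up 8x7 lowercase 'c', just to show the format.
glyph_c = [
    "        ",
    "        ",
    "  ***   ",
    " *      ",
    " *      ",
    " *      ",
    "  ***   ",
]
font_bytes = rows_to_bytes(glyph_c)
```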
Next: Memoir 4.
May 19, 2020
My first PC was a Soviet LSI-11 compatible, based on a DVK-3 motherboard in an Electronika-60 chassis with a transformer-based power supply. It had 256KB of RAM. I managed to find an MFM winchester controller and a Seagate ST-20 20MB hard drive. So, it was capable of running UNIX.
I went to Demos and got a distribution of their UNIX. It was based on v7, but with many features of BSD 2.9 grafted in. Unfortunately, the bootstrap procedure crashed on my machine.
I cannot remember how I did it, but I identified the problem: the DVK-3 CPU did not implement the MFPS instruction, but the kernel identified the CPU as a certain model of PDP-11 and assumed that it had to have MFPS. I might have used my low-level debugger for it.
By that time Demos was already phasing out the obsolete stuff, and there was no chance of getting a new distribution composed for a free-loader. In order to fix the crash, I went a long way. Using Mark Vengerov's C under MISS on the ES-1011, I ported ld(1). Then, I used Vadim "avg" Antonov's portable assembler (it was unbelievable luck that he wrote that in the first place; his objective was speed - it worked many times faster than the original UNIX v7 assembler written in assembly). Finally, I literally ported K&R C under MISS, with the output into avg's assembler and the aforementioned ld(1). It turned out that the compiler was absolutely full of endian assumptions, and the Mitra-225/ES-1011 was big-endian. I fixed them all. And finally, I used the toolchain that I built to cross-compile DEMOS' bootstrap loader and kernel. Then, I wrote it to a distribution floppy.
It all worked in the end. Now that I'm writing all this, it boggles the mind that I was able to pull it off. I was too young to know what was impossible back then.
UNIX in 256KB sucked though. And it was entirely useless. I was very happy to replace it with an IBM PC in a year or two.
Next: Memoir 3.
May 17, 2020
Some time around 1988 I wanted to port UNIX to the ES-1011, a Hungarian clone of the Mitra-225. It was a 16-bit+ mini with an accumulator-based instruction set, and unlike its contemporary, the PDP-11, it had no hardware-supported stack.
Our ES-1011 ran OS MISS, written by the late Vladimir Butenko, including a C compiler by Mark Venguerov. Mark found implementing a stack somewhat annoying. His prologue routine had 22 instructions and the epilogue had something like 9. The K&R C also had a non-trivial prologue function, called '.jsav' IIRC. But it wasn't as bad as having a word in memory work as a stack pointer on the Mitra-225/ES-1011.
Because entering functions properly was so expensive, Mark made a decision not to make all functions recursive by default. Instead, those that needed to be recursive were marked with a reserved word "recursive". It was expected that not many needed it. The problem was how to find those that did and mark them. I didn't do that, but I think someone developed a program that searched for potential call loops (perhaps Ivan Bobrov or Sasha Ovsyankin did). Even so, it was rather iffy to identify and tag recursion.
Before I started porting, I consulted with Vadim Antonov, who was basically King of Unix in USSR. His suggestion was to start with developing the ABI and calling convention. I don't remember if he pointed it out, or if I decided it myself, but Mark's explicit recursion was a non-starter. I began looking into the problem and had an idea.
Mitra-225 had an additional register, L, which was possible to read in userspace. I think it pointed at the word where the return address was saved. The instruction set featured a few rarely used addressing modes with L as an index register. Also, a procedure return instruction set L as a side effect, as I recall. Either way, I decided to use L as a frame pointer. I coded a somewhat unconventional prologue that recovered the return address and the previous value of L, then re-set L so it continued to point into the stack, and pushed the return address there. It only took something like 9 instructions. The epilogue was even smaller and was basically a RET. The code was compact and performant enough to allow all C functions to be recursive, as they should be.
When I demonstrated the code to Mark, he looked at me for a long moment and said: "Where have you been before, Pete?"
By that time, he was busy with his 2nd C compiler, for the 8086, and he never got back to retool the C for Mitra. And I burned out of porting UNIX for one reason or another. My diploma thesis was porting MISS to PDP-11, ironically enough. So, nothing came out of my craftiness.
Next: Memoir 2.
Update: There's a manual for Mitra-15 floating around on the Internet, so I used it to refresh my memory somewhat. The L register is loaded from a global "section table" upon a procedure call (CLS: Call Section). This hardware support and traditional use are the reason why nobody had come upon the obvious. But the procedure return (RTS) is recursion-friendly: it loads the return address from L+0 and the L from L+2.
April 10, 2020
The pecan got planted in the autumn, but its leaves turned nasty brown and fell off, so I gave it up for dead. However, my wife detected flexibility in its stem (which has not yet upgraded to a trunk) and suggested that it might yet come back. And it did! Stupid deciduous trees!
I took pictures of it with the dying leaves, but it didn't look good against the background of dead grass. In fact it was next to impossible to tell what one was looking at. So, this time I provided an artificial background.
March 23, 2020
In response to a viewer's question, what pistol caliber is set to follow the .40 S&W into the dustbin of history, Jon Patton of The Gun Collective suggested .380 ACP today. Quoting:
It will take a while, but I think with the rise of the 9, 380 will start to fade away like .32 ACP and the .25 ACP did.
Well, I think it's a very risky bet Jon is making. What we currently have are two trends: what he calls "the rise of The Nine", and the buying public refusing to listen to the gun elite and continuing to prefer the .380.
The former trend expresses itself in new and cool guns not being offered in .380, only in 9mm. SIG P365 and Glock 43X are the main examples of this. There is no Glock 42X and never will be. Probably.
The latter trend started in 2003, when Kel-Tec introduced the P-3AT, and continues to the present. The industry steadily introduced new products in .380, such as the Ruger LCP (2008), Glock 42 (2014), Browning 1911-380 (2015), Ruger LCP II (2016), S&W M&P Shield 380EZ (2018), and SCCY CPX-3 (2019). Even Beretta, who had stopped making the Model 84, relented and re-started production!
As the two trends battle it out, the fate of the .380 hangs upon the introduction of a small gun with a 10 or 11 round capacity. It must be well designed and of high quality. And it likely needs to be made by a top-tier brand, and be striker fired (until the LCP II appeared, this last point was a must, but I'm not as sure anymore). If this gun does not appear before 2023 at the latest, the .380 is finished. If it does appear, it's going to let the .380 catch up with the 9 yet again.
Unfortunately, if SCCY takes the frame of the CPX-3 and drops the DVG-1 firing system in, the result will not be enough. It has to be SIG, Glock, Ruger, S&W, or Walther. The chances of that happening are not great, but not zero.
I think that in large part a company making these decisions depends on new shooters. Many commenters seem to think that the key feature of the .380 that enabled its rise is the small size of the guns. It may have been so, but with the P365 existing, it no longer is a factor. However, the lower recoil still remains. The .380 allows making a small gun that is comfortable for beginners to shoot. That is why the 380EZ came to exist. If a shrewd executive agrees with this line of reasoning, anything is possible.
P.P.S. Remington RM380 was announced, delayed, and shipped in 2015.
P.P.P.S. I downloaded the 2019 Beretta catalogue, and the Series 80 Cheetah pistols are not in it. Apparently Beretta made one last production run of the Model 84 at the peak of the popularity of the .380 around 2015-2016, then canceled Cheetah again.
P.P.P.P.S. A post at SIGtalk about the P365 in other calibers made me look at ATF reports. And so, the fraction of .380 automatics produced and sold was:
Year - 9mm - 380
2019 - 57% - 15%
2018 - 54% - 20%
2017 - 48% - 23%
2016 - 48% - 24%
2015 - 43% - 23%
2014 - 35% - 24%
2013 - 38% - 19%
2012 - 35% - 17%
In other words, the .380 peaked in 2014-2016 and has been in decline since (before the 2020 pandemic).
January 16, 2020
November 10, 2019
Not sure if this is something known in America, but I learned about it from the couple set in Tentai Senshi Sunred (Astro Fighter Sunred). When Vamp suddenly comes to visit, Sunred hides his half of the set, because he feels embarrassed to show such a proof of a close relationship.
September 20, 2019
The pecan is growing and about ready to be planted.
Update: Pecan 3.
September 13, 2019
The above is called a "plumcot", and it is a hybrid of plum and apricot. I think it makes way more sense than a cronut.