Dragons on the Moon

Moonbound (02024), the latest novel by Robin Sloan, takes you on a journey. It begins with the invention of faster-than-light space travel, the creation of artificially intelligent dragons by futuristic humans, and a war between humans and dragons waged in the skies and on the moon, followed by ten thousand years of darkness. That’s just the prologue of the story — four pages of set-up before the fun really begins.

Sloan’s first novel, Mr. Penumbra’s 24-Hour Bookstore (02012), was a captivating oddity: a story that started in the familiar context of tech-boom-era San Francisco and ventured into the weirder and more elusive territories of secret societies and cultic mysteries. Moonbound pulls off a similar trick on a grander scale, taking the trappings of classic fantasy and science fiction stories through the ages and using them to tell a thrilling coming-of-age story that combines far-seeing, heady speculative fiction with punchy, propulsive adventure.

It’s also chock-full of ideas on long-term thinking — and references to The Long Now Foundation’s work. Sloan has long been a supporter of Long Now (Member #127) — he even spoke at the 02010 Long Conversation. We were excited to get the chance to catch up with Sloan and discuss the science and stories that informed Moonbound, his writing process, and how beavers might debate each other.

Moonbound's cover. Illustration by Na Kim. Courtesy of Robin Sloan.

The following conversation has been edited for length and clarity.

Long Now Foundation: How would you describe Moonbound for those who have not had the chance and privilege to read it? 

Robin Sloan: I'm notoriously bad at describing my own books. Maybe that's common. The book is its own description. If you could describe it easily, you wouldn't bother writing it. 

I can go at describing Moonbound two ways. One is a bit meta: this is my attempt to put a book up on a shelf that I have drawn from my entire life, but especially when I was a young reader. For me personally, that shelf has books like The Chronicles of Narnia, the Books of Earthsea by Ursula K. Le Guin, and The Chronicles of Prydain by Lloyd Alexander, the adventure stories of Rosemary Sutcliff, as well as certain Studio Ghibli movies. At a certain point you realize that you've benefited so much from a certain tradition or set of traditions, and the impulse grows to pay back into the bank and try to put another little brick in the wall for other people to enjoy in years — and hopefully generations — to come. So that's how I want the book to work in the world.

But: why write this book for that shelf? And the answer is just that, of course, these are the things I'm interested in and these things are as diverse as the Arthurian mythos and all the fun tropes and tools of fantasy along with the sort of large scale imagination that science fiction demands. And I tried to mash those things together and make something that was fun and interesting to read.

There’s a lot about scale in the book in a way that's very near and dear to Long Now. 

Of course! To hear you react to that — maybe I should get better at saying that this is an adventure about scale. It is close to the heart of the book.

The potted description you give on the website is “The year is 13777. There are dragons on the moon,” which immediately hits you with both temporal and spatial scale in a very dramatic way. That 11,000-year gap between the present day and where this book settles is quite vast. On that scale question: what was appealing about having a gap of that magnitude built into the novel?

That is a good identification. Most people would agree — readers of science fiction along with writers — that there is this real phase change. 50 years in the future, a hundred years in the future — that’s one set of questions and challenges. When you go into the thousands, certainly the multi-thousands, it's a different imaginative challenge. I very specifically wanted to take that on.

Long Now was an influence on me. I've been attending the lecture series in San Francisco from its beginning in the early 02000s. Those lectures were a powerful influence on me — both the talks themselves and the whole line of thinking, going out to five digits. It's important that it's 10,000 years. The extra digit just destabilizes everything. When you get to that range, it knocks you out of your consensus, ready-made futures. It zooms past our Star Trek imagination. You have to answer the question: Well, okay, what else? And then, the fun begins.

What was really striking was how you managed to cultivate ideas in this story that have kinships with those of the present day, but still have obviously very deep divergences. On your preview site for the book, you wrote an essay called “The Widening Aperture” about the widening spatiality of playing Final Fantasy II and seeing the map expand from a small medieval kingdom into a whole wider world where you can fly around and see a mix of futuristic and non-futuristic elements. You compare it directly to the experience of sitting in a dark auditorium in San Francisco for a Long Now talk and having your temporal perspective expanded. That's also a feeling that I had when I was reading the book. There were moments where you intentionally created those zoom-out moments of “Oh, there's more to this.”

Yes, yes. It's a fun thing to do as you develop as a writer. I've been writing fiction for many years now, and I've been able to see myself get better and more capable. In the early days when you're first starting out, you're just trying to write something that's comprehensible and readable. As you get further along, you access that way of working where you actually can say, I want to produce a very particular feeling — like those bumps in scale. There's sort of a discontinuity to them as well, which I think is helpful. It's not a completely elastic thing.

Not only am I happy with how it worked in Moonbound — with the scale that this book ends up at — but the vision is for a series and we’ve got more scale to go. My hope is that this one finds a substantial enough readership that it will merit at least two more books in the series. The vision for the second book is to go from the macro-regional to the truly planetary, and we will find out the fate of all of Earth in this weird time period. And then the third one, obviously we’ve got to go to space. We’ve got to go to the stars and see what operates and what kind of stories are going to unfold on the very largest scale. I would love to do that. That just sounds like a fun challenge and a fun set of feelings to produce.

There's so much in this book on that planetary scale, whether it's the networked intelligence of a robot pilgrim that's a pretty core character or other notes about different animal intelligences and societies with distinct ways of seeking knowledge. A lot of that felt very — not ripped out of the headlines, but — influenced by recent philosophy of biology and a lot of other work on planetary intelligence and planetary reckoning. A few months ago, I talked to Nils Gilman and Jonathan Blake of the Berggruen Institute, who recently wrote a book about planetary governance. What work was influencing your thinking? In terms of science fiction, there's always an element that is borrowed from the science and the technology that is happening in the present day.

I would say it's a mix of the general osmosis of the things we know now, some specific references, and some personal experience. There's a lot to choose from in terms of the things that humans are coming online to: a more manifold understanding of the kinds of life. For me — I think readers of my previous novel Sourdough (02017) will get this very clearly — microbial life is a big deal, both in the world and in our own bodies. At this point I feel like the science is pretty settled. We are committees, and the vast community of empires rising and falling in our guts is affecting our mood and what we want every day. We're in conversation with them. There's an obvious connection between that and the fungal narrator in Moonbound, and the narrator's relationship to the protagonist, Ariel. I will say very specifically, there's a book that inspired me called Ways of Being. It's by James Bridle, who is such a wonderful thinker and has cut a very interesting path through publishing and art. Ways of Being is great, highly recommended in part because he does such a great job of casting a really wide net and synthesizing other people's findings and research. And with every chapter there are nine science fiction stories you could go and write just based on that chapter.

But to your point: what’s actually exciting is that it's not a list of two potential influences. It's a vast web and the last few decades have seen science and scholarship and popular culture coming online to these different ways of being on this planet. It's really good. There are a lot of things in our culture that are not healthy and going in the wrong direction, but I think that one is actually very healthy, and is ultimately a necessary development.

One thing that shows up a little bit in Bridle — less about biology and more about modern life and the way it's changing — that I have just been really conscious of myself, is something that connects to Clovis, my robot pilgrim character. And that’s this question: Are we not in more than one place at once now? Right now, I'm in your room and you're in my room, right? The singular eye of life and experience used to not admit these kinds of connections — this sort of spooky action at a distance. Now, most of us walk around with a phone and that phone is constantly spider-webbing us out to — I mean, who knows? For every person, it's different: family and news and weird fascinations. It does actually demand a reconsideration of our points of view. In Moonbound, there's a line where the narrator is learning about Clovis, who has multiple awarenesses around the world. The narrator says, actually, the Anth [the near-future human society in Moonbound] did this too. People did this too — they just never did it as gracefully as Clovis did.

On that note: the perspective of the chronicler at the heart of Moonbound is very interesting. The line that was striking to me, and is quite Sourdough-esque, is “I live by the logic of yeast and that logic is multiplicity.” Later in the story, there are entities that are what we would call artificial intelligences and humans trying to understand them that exist and think on the scale of millions of dimensions. That felt very influenced by what's happening technologically and what people are talking about culturally around artificial intelligence. So could you tell me a little more about that influence on this novel?

It's an interesting contrast. The planetary and interspecies multiplicities element: that was something in the air for the last 10 years. The AI side of it, the high dimensional spaces where thought is suspended in this weird, unimaginable, but very potent space, is a direct result of my preoccupation and experimentation with AI language models. Hilariously, this was before they got powerful and popular. I timed it wrong! 

From about 02016 until about 02021, I spent a lot of time tinkering with that stuff. By temperament and constitution, I'm a technical tinkerer with all sorts of things. This was so far before ChatGPT — it was much more of a toy. But even in the early days it had a real poetry to it. Some of the text it could generate was weird — I couldn't write that, those strange computational dreams. That drew me into that world. 

I was going to try to use some AI tools that I built to write a novel, or part of a novel. That did not work out. It wasn't actually very much fun, and the result wasn't good enough. It wasn't what I wanted to put my name on and share with the world, but the mechanisms, the math, the theory of how these things work were super provocative — just really interesting to think about. I put that stuff straight into this book.

The discussions in the book of high-dimensional spaces, and the way that these systems take information and knowledge and hack it into this weird form: that's in fact all real. That's not fantasy, it's not science fiction; it is how they work. One of my dreams for the book is that a person reads it — maybe someone on the younger side, a high school student or early college student — and a year later they take their first machine learning course at Michigan State University, or wherever, and they go, “I'll be damned. I already understand that.” Because they read it! They learned from what the scholars at the College of Wyrd were doing in Moonbound.

There were so many moments in the book where, if I had read them when I was 12 or 15, I would then, in the next half decade after that, have had so many points of connection in biology class, in a computer science class, and so on.

That’s what I hope is the multichannel appeal of the work. You're trying to reach a lot of people in a lot of different ways, of course. But the readers that I really hope connect with the book are those precocious, curious, 13-to-15-year-olds who are ready to have their minds opened.

There is a lot about biology and biotechnology and artificial intelligence in the book, but the most striking technology in the story is story itself: the power of narrative as a sort of cultural technology. Tell me more about the role of myth and storytelling itself in Moonbound.

In an odd way, that's the part that I have the fewest cogent things to say about, even though it actually is the most important. When I sat down to start this book in particular, I was so excited to have the opportunity to play with — I think of it as the cultural keyboard. There are these keys of these archetypes and these symbolic patterns. It’s not just King Arthur — it's deeper stuff. It's quests and swords and wizards and all of that. 

Anyone who reads my previous two novels sees very clearly that they're written by someone who really likes fantasy and science fiction. But the novels themselves almost encase that in a buffer of our real world and the contemporary, and they provide other pleasures. They're good. For this one, I thought, I get to finally play the keyboard and actually activate those symbols. I don't have a deep theory for why this is so compelling — I have no great academic paper I'm waiting to write on the power and resonance of myth through culture. It's there, though. It's obviously there, and it's so much fun. It's fun in the same way that people who get the job to write Spider-Man comics or Batman comics have so much fun. You get to play with the good toys, you know? It doesn't have to be corporate property, though. It can be these wonderful symbols and archetypes that are just free for everybody.

You’re working in a lineage here. Actually, you're working, as all fiction writers are, in multiple lineages. You've got the Arthurian lineage very clearly in the story, but also the heady sci-fi lineage of Le Guin and more modern writers like Becky Chambers and Ada Palmer. How do you balance that with writing something that is, in fact, quite novel and new as a story?

Part of what unlocked it for me was that I have come to understand the influences of my influences better. To put one example out there: I recently reread C.S. Lewis and his thoughts about writing and the writers that he loved. Those are writers like George MacDonald, who is quite forgotten, and William Morris, who's not forgotten, but whose weird fairy romance epics are not very heavily read anymore. It was cool to read those and see clearly how C.S. Lewis was the transmitter. He absorbed some of the best parts of those writers. He made them new in his time — which is obviously not new anymore, but it was enough to put some spin on the ball and keep it going for a century.

It’s a little presumptuous to claim a spot in any lineage that illustrious, but that’s the attempt. I want to put some more spin on the ball, renew it in ways that make it accessible and interesting to new readers and keep it going. That's part of recognizing a lineage and stepping up to be part of one. It's not just about homage and repetition, it's about renewal. It has to be about renewal too, and then you just try your best and see what happens.

One example from William Morris: he can be hard to read, but some of his books have such delicious images, some of which C.S. Lewis just lifted. One of his long books is called The Well at the World's End and involves a long, long journey to a magic well on a cliff facing the ocean. That's where I got my Wyrm of Wyrd and the strange diving pool on the cliffs. So people will still enjoy this image, even if they don't know, the first time they read it, that that's where it comes from.

That's such a great little fact. You can see this genealogy where these influential writers were also being influenced themselves, this chain going all the way back—

—to whoever is the first, and it's really energizing. It makes you understand that the ones you love were just readers too. They were engaged in exactly the same practice of reading something and thinking, “oh, man, that rules, that's awesome.” Then you just want to make more of it somehow, or you want to reflect it. It fills you up so much that there's an overflow and that spills out into this new thing.

What is your drafting process? 

The drafting process for this one was similar to my previous book, Sourdough. I have a very diligent and disciplined note-taking practice. I have many other weaknesses as a writer, but I think one of my Olympic-caliber strengths is being disciplined about capturing interesting thoughts and ideas I come across. It's a mix of little bits of science stories that I encounter, things I overhear people saying, and things that occur to me when I'm doing something else, which I dictate into a voice note. I capture all that stuff and collect it into one big stew pot. It's a really productive process.

When I sit down to begin things, I just marinate in my own stew for a while. It'll be a couple of weeks and my task at that time is to go through those notes of all those things that caught my eye at some point. As you spend time with them, you start to gather things together and you start to see themes emerge or clumps. There are characters in here that are three different notes that sort of found each other and I put them together like a little Lego set, tick, tick, tick. 

So in some ways it's organic, but in other ways it's quite systematic. It's definitely not that I just look out the window and hope that something occurs to me. It’s all of these pre-gathered riches and weird thoughts that I shape together somehow.

The beautiful thing is there's a million ways to do it. There's not just one recipe. It works across genres too, fiction and nonfiction. If I ever give writers advice, I tell them: however many notes you're taking now, you probably should be taking more, and you should be a little more liberal with them. I do have sort of a persnickety thought, though. Some people I know are big note-takers, but they want their notes to fit into this very orderly crystal palace of thought. I don't know if that is where good stuff comes from. It's not about building a perfect database of everything linked in the right way. That's not how thought really works.

On a tangentially related note — there's a very direct Long Now reference in the book, in the form of a debate format used by a firm of carbon accounting beavers.

Naturally, right?

In those debates, each speaker must agree with their opponent's reconstruction of their argument before the debate can proceed, which is part of the Long Now debate format. What felt right about incorporating that into the storm of references and ideas there? They're very interesting beavers, all named after naturalists.

There are two levels to that.

One is that I went to two of the Long Now format debates back in the early days of the seminars, and I was astounded. I loved it. I had never even contemplated anything like it. As the years went on, I only loved it more, because you see a Long Now debate, and then you see another debate on TV and you're like, “oh no, this is terrible.” It was a profound thing to encounter. So I do believe it was inevitable that that specific concept was going to make it into a story at some point. You can't have something that prominent in your brain and not have it get stamped out at some point.

💡 Intrigued by the Long Now debate concept? Watch this debate on Nuclear Power, Climate Change, and the Next 10,000 Years between Ralph Cavanagh and Peter Schwartz from 02006.

I would like to know if the 3D format of the debates appeared at the same time, or if those were separate somehow. I'm quite proud of that. Not only that they used the Long Now debate format, but that they do it with sculptures because beavers, of course, are fundamentally three dimensional and sculptural thinkers more than they would be linguistic thinkers. But together, it's great. I don't think anybody has ever written a scene quite like that before. That's a fun feeling to have — you put something weird and kind of new on the page.

There was a beautiful tension where the human characters who encounter it sort of get it, but then just bounce up against it and don't fully succeed in understanding this fundamentally non-human way of thinking about the problem.

There's a line later in the book where the chronicler says: “That's a dangerous technology.” Meeting the requirement to truly and powerfully state your opponent's position in a compelling, strong way (what some call the steelman) is not all just happy, rational smiles and handshakes. Because: what if it's so good that you suddenly believe it? I think that fear, that nervousness, actually underlies the reaction to the Long Now debate format, and even other debate formats. I think a shadow of that fear underlies a lot of people's engagement with different debates in the real world. I really do.

A core question in the book, and in the text a core question of this age of humanity, is: What happens next? What felt so potent about that as a question motivating a lot of this work?

That question is repeated several times in the text, and the chronicler owns up to saying that it is the great question etched into the core of their being, of their construction. In a lot of ways that is just me speaking straightforwardly. For me as — not even as a writer — as a person, that question has always been completely captivating and urgent. That’s driven my interest in science fiction, but also in the news: I am interested in what happens next. I want to know how it plays out. We have different crises and questions and uncertainties and instabilities in the world across all these dimensions, and I want to know what happens next.

I don't actually know to what degree that's universal. I think there are plenty of people, whole cultural fabrics, that are frankly less interested in that, or even actively uninterested, which is fine. I don't intend “what happens next?” as having a ton of inherent value to it. However, I can't deny that for me, it's constitutional. If you're going to write something that means something, you've got to put your own most urgent questions into it. And for me, that's one of them. It's one of the questions that's stenciled under my heart, so I have to put it in the book.

Well, what’s your own “What happens next?” What are you working on now that you finished this massive thing? It sounds like you've already got grand intentions for this series.

That is the great hope. It's not a foregone conclusion in the sense that if there's not an audience for more Moonbound, then I won't foist it upon anyone. But assuming that this book finds a fair readership, I've got grand visions. You should see the notes I've made and retrieved for books two and three of this saga, because there's a lot going on. But it is a book about scale in a very Long Now way. It’s about thinking about scale and how challenging yourself to think on larger scales in time and space is a very healthy thing: intellectually, politically, morally — probably good for your blood pressure, too. I would love to continue to both encourage and challenge people who read this, who maybe are not Long Now members who are already marinated in this kind of thinking, to join the fun.


Artifacting


Today in Tedium: I fully admit it—I stretch images. I also intentionally wash images out, remove as many colors as possible, and save the images in formats that actively degrade the final result. This is a crime against imagery on the internet, an active disregard for the integrity of the original picture, but I kind of see it as having some artistic advantages. Degradation, you see, is a tenet of the modern internet, something that images have to do to flow through the wires more quickly. Gradually, the wires got fast enough that nearly any still image could be delivered through them in a reasonable amount of time. But the artifacts still matter. The degradation still matters. The JPEG was the puzzle piece that made the visual internet work. With that in mind, today’s Tedium considers how the JPEG came to life. — Ernie @ Tedium

Today’s GIF comes from a Computer Chronicles episode on file compression. Enjoy, nerds.




We are going to display this image of a forest at a variety of quality settings. At 100% at 1,200 pixels wide, it is almost a full megabyte. (Entire series by Irina Iriser/Unsplash)

The GIF was a de facto standard. The JPEG was an actual one

I always thought it disappointing that the one time Steve Wilhite truly addressed his audience of admirers in the modern day, he attempted to explain how the file format he invented was pronounced. And it didn’t go over particularly well.

I remember it well. Back in 2013, he claimed it was pronounced with a soft G, like the brand of peanut butter. I posted about the quote on ShortFormBlog, and it got nearly 5,000 “notes” on Tumblr. Many commenters felt steamed that this random guy emerged after a quarter-century to tell them how their word was supposed to be pronounced. I’m convinced this post unwittingly set the tide against Wilhite on the GIF’s favorite platform, despite the fact that I personally agreed with him.

The Frogman, a key innovator of the animated GIF form, put it this way: “It’s like someone trying to tell you ‘Sun’ is actually pronounced wombatnards.”

But in many ways, the situation illustrates how Wilhite, who died in 2022, did not develop his format by committee. He could say it sounded like “JIF” because he literally built it himself. It was not the creation of a huge group of people from different parts of the corporate world. He was handed the project as a CompuServe employee in 1987. He produced the object, and that was that. The initial document describing how it works? Dead simple. Thirty-seven years later, we’re still using the GIF.

The JPEG, which formally emerged about five years later, was very much not that situation. Far from it, in fact—it’s the difference between a de facto standard and an actual one.

Built with input from dozens of stakeholders, the goal of the Joint Photographic Experts Group was ultimately to create a format that fit everyone’s needs. And when the format was finally unleashed on the world, it was the subject of a 600-plus-page book.

And that book, not going to lie, has a killer cover:

Look at this hip cover; excellent example of 1992 design.

JPEG: Still Image Data Compression Standard, written by IBM employees and JPEG organization stakeholders William B. Pennebaker and Joan L. Mitchell, describes a landscape of multimedia imagery held back by the lack of a way to balance the need for photorealistic images with immediacy:

JPEG now stands at the threshold of widespread use in diverse applications. Many new technologies are converging to help make this happen. High-quality continuous-tone color displays are now a part of most personal computing systems. Most of these systems measure their storage in megabytes, and the processing power at the desk is approaching that of mainframes of just a few years ago. Communication over telephone lines is now routinely at 9,600 baud, and with each year modem capabilities improve. LANs are now in widespread use. CD-ROM and other mass-storage devices are opening up the era of electronic books. Multimedia applications promise to use vast numbers of images and digital cameras are already commercially available.

These technology trends are opening up both a capability and a need for digital continuous-tone color images. However, until JPEG compression came upon the scene, the massive storage requirement for large numbers of high-quality images was a technical impediment to widespread use of images. The problem was not so much the lack of algorithms for image compression (as there is a long history of technical work in this area), but, rather, the lack of a standard algorithm—one which would allow an interchange of images between diverse applications. JPEG has provided a high-quality yet very practical and simple solution to this problem.

And honestly, they were absolutely right. For more than 30 years, JPEG has made high-quality, high-resolution photography accessible in operating systems far and wide. Although we no longer need to compress every JPEG file to within an inch of its life, having that capability helped enable the modern internet.

(The book, which tries to explain the way JPEG works both for the layperson and through in-depth mathematical equations, is on the Internet Archive for one-hour checkout, by the way, but its layout is completely messed up, sadly.)

As the book notes, Mitchell and Pennebaker were given IBM’s support to follow through on this research and work with the JPEG committee, and that support led them to develop many of the JPEG format’s foundational patents. One of the first patents filed by Mitchell and Pennebaker around image compression, filed in 1988 and granted in 1990, described an “apparatus and method for compressing and de-compressing binary decision data by arithmetic coding and decoding wherein the estimated probability Qe of the less probable of the two decision events, or outcomes, adapts as decisions are successively encoded.” Another, also tied to Pennebaker and Mitchell, described an “apparatus and method for adapting the estimated probability of either the less likely or more likely outcome (event) of a binary decision in a sequence of binary decisions involves the updating of the estimated probability in response to the renormalization of an augend A.”

That likely reads like gibberish to you, but essentially, IBM and other members of the JPEG standards committee, such as AT&T and Canon, were developing ways to use compression to make high-quality images easier to deliver in confined settings.

At 85% quality, it is down to about 336k, which means that dropping just 15% of quality saved us two thirds of the file size.

Each brought their own needs to the process. Canon, obviously, was more focused on printers and photography, while AT&T’s interests were tied to data transmission. Together, the companies left behind a standard that has more than stood the test of time.

All this means, funnily enough, that the first place that a program capable of using JPEG compression appeared was not MacOS or Windows, but OS/2, which supported the underlying technology of JPEG as early as 1990 through the OS/2 Image Support application. (The announcement of the support went under the radar, being announced as “Image compression and decompression capability for color and gray images in addition to bilevel images,” but Pennebaker and Mitchell make clear in their book that this coding appeared in OS/2 Image Support first.)

Hearing that there was a “first application” associated with JPEG brought me down a rabbit hole. I did a long search for this application yesterday, trying to find as much info as possible about it. My process involved setting up an OS/2 VM and a modern web browser, so I could run any OS/2 applications related to this.

But it was all for naught, though it did lead to an entertaining Mastodon thread. Unfortunately, what I thought would bring me a step closer to the application only led me to a text file describing it.

Any IBM employees with a copy of OS/2 Image Support lying around? You’re holding the starting point of modern-day computerized photography.

 
 

“The purpose of image compression is to represent images with less data in order to save storage cost or transmission time and costs. Obviously, the less data required to represent the image, the better, provided there is no penalty in obtaining a greater reduction. However, the most effective compression is achieved by approximating the original image (rather than reproducing it exactly), and the greater the compression, the more approximate (‘lossy’) the rendition is likely to be.”

— A description of the goals of the JPEG format, according to JPEG: Still Image Data Compression Standard. In many ways, the JPEG was intended to be a format that could be perfect when it needed to be, but good enough when the circumstances didn’t allow for perfection.

 
 

That same forest, saved at 65%, using a progressive load. Down to about 200k. This will load faster. However, images load so fast now that you may not even notice the progressive rendering unless you’re on a slow internet connection or a slow computer.

What a JPEG does when you heavily compress it

The thing that differentiates a JPEG file from a PNG or a GIF is the nature of its compression. The goal for a JPEG image is to still look like a photo when all is said and done, even if some compression is necessary to make it all work at a reasonable size. The idea is to make it so that you can display something that looks close to the original image in fewer bytes.
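That dial is exposed directly in most image libraries. As a concrete illustration, here is a minimal sketch, assuming Python with the Pillow imaging library and a hypothetical source file named forest.png, that reproduces the quality ladder running through this piece:

    # Re-save one image at descending JPEG quality settings and
    # report the file size at each step. Assumes Pillow is
    # installed (pip install Pillow) and that forest.png exists.
    import os
    from PIL import Image

    img = Image.open("forest.png").convert("RGB")  # JPEG has no alpha channel

    for quality in (100, 85, 65, 30, 15, 7, 1):
        out = f"forest_q{quality}.jpg"
        img.save(out, "JPEG", quality=quality)
        print(f"quality {quality:3d}% -> {os.path.getsize(out) / 1024:.0f} KB")

Every file the loop writes is a complete, standalone JPEG; the only thing the quality knob changes is how aggressively the transform coefficients described next get rounded away.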

Central to this is a compression process called discrete cosine transform (DCT), a lossy form of compression encoding heavily used in all sorts of compressed formats, most notably in digital audio and signal processing. Essentially, it delivers a lower-quality product by removing extreme details, while still keeping the heart of the original product through approximation. The stronger the cosine transformation, the more compressed the final result.

The algorithm, developed by researchers Nasir Ahmed, T. Natarajan, and K. R. Rao in the 1970s, essentially takes a grid of data and treats it as if you’re controlling its frequency with a knob. The data comes out like a faucet, or like a volume control. The more data you want, the higher the setting. Essentially, DCT allows a trickle of data to still come out even in highly compromised situations, even if it means a slightly compromised result. In other words, you may not keep all the data when you compress it, but DCT allows you to keep the heart of it.

That is dumbed down significantly, because we are not a technical publication. However, if you want a more technical but still somewhat easy-to-follow description of DCT, I recommend this clip from Computerphile, featuring a description of compression from computer imaging researcher Mike Pound, who uses the wales on the jumper he’s wearing to break down how the cosine transform functions.
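If you do want to see the moving parts, here is a small numpy sketch of the core idea. It is not the real JPEG pipeline (no color transform, quantization tables, or entropy coding); it just transforms a made-up 8×8 block, discards the faint high-frequency terms, and transforms back:

    # Lossy DCT compression on a single 8x8 block, sketched with numpy.
    # Forward transform, drop small coefficients, inverse transform --
    # the "keep the heart of the data" idea described above.
    import numpy as np

    N = 8

    def dct_matrix(n):
        """Build the orthonormal DCT-II basis matrix."""
        k = np.arange(n)
        mat = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        mat[0, :] /= np.sqrt(2)
        return mat * np.sqrt(2 / n)

    D = dct_matrix(N)

    # A made-up block of pixel values: a smooth gradient plus mild noise.
    rng = np.random.default_rng(0)
    block = np.linspace(0, 255, N * N).reshape(N, N) + rng.normal(0, 8, (N, N))

    coeffs = D @ block @ D.T            # forward 2-D DCT
    mask = np.abs(coeffs) > 20          # crude quantization: drop faint terms
    approx = D.T @ (coeffs * mask) @ D  # inverse 2-D DCT

    print(f"kept {int(mask.sum())}/{N * N} coefficients,",
          f"mean pixel error {np.abs(block - approx).mean():.2f}")

For a smooth block like this one, nearly all of the energy lands in a handful of low-frequency coefficients, which is why throwing most of the grid away barely dents the picture.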

DCT is everywhere. If you have ever seen a streaming video or an online radio stream that degraded in quality because your bandwidth suddenly declined, you are witnessing DCT being utilized in real time.

A JPEG file doesn’t have to leverage the DCT in just one way, as JPEG: Still Image Data Compression Standard explains:

The JPEG standard describes a family of large image compression techniques, rather than a single compression technique. It provides a “tool kit” of compression techniques from which applications can select elements that satisfy their particular requirements.

The toolkit has four modes, which work in these ways:

  • Sequential DCT, which displays the compressed image in order, like a window shade slowly being rolled down
  • Progressive DCT, which displays the full image in the lowest-resolution format, then adds detail as more information rolls in
  • Sequential lossless, which uses the window shade format but doesn’t compress the image
  • Hierarchical mode, which combines the prior three modes—so maybe it starts with a progressive mode, then loads DCT compression slowly, but then reaches a lossless final result

At the time the JPEG was being created, modems were extremely common, and that meant images loaded slowly, making Progressive DCT the most fitting format for the early internet. Over time, the progressive DCT mode has become less common, as many computers can simply load the sequential DCT in one fell swoop.
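In Pillow terms (the same hypothetical setup as the earlier sketch), the choice between the two DCT delivery modes comes down to a single flag at save time:

    # Baseline (sequential) vs. progressive JPEG with Pillow.
    # The file names here are illustrative.
    from PIL import Image

    img = Image.open("forest.png").convert("RGB")

    # Sequential "window shade": scan lines arrive top to bottom.
    img.save("forest_baseline.jpg", "JPEG", quality=65)

    # Progressive: a coarse full-frame pass first, detail in later scans.
    img.save("forest_progressive.jpg", "JPEG", quality=65, progressive=True)

The two files decode to the same picture; they differ only in the order the compressed data is arranged for the viewer.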

Down to 30%. About 120k. Still looks like a photo!

When an image is compressed with DCT, it tends to be less noticeable in areas of the image where there’s a lot of activity going on. Those areas are harder to compress, which means they keep their integrity longer. It tends to be more noticeable, however, with solid colors or in areas where the image sharply changes from one color to another—you know, like text on a page. (Which is why if you have a picture of text, you shouldn’t share it in a JPG format unless it is high resolution or you can live with the degradation.)

Other formats, like PNG, do better with text, because their compression is lossless. (Notably, PNG’s compression format, DEFLATE, was designed by Phil Katz, who also created the ZIP format. The PNG format uses it in part because it was a license-free compression format. So it turns out the brilliant coder with the sad life story improved the internet in more ways than one before his untimely passing. How is there not a dramatic movie about Phil Katz?)
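You can watch the difference in philosophy with Python's built-in zlib module, which wraps DEFLATE: a lossless round trip returns the exact original bytes, and repetitive, text-like data shrinks dramatically, while random noise (a stand-in for busy photographic data) barely budges. The sample data here is illustrative:

    # DEFLATE via Python's built-in zlib: a lossless round trip that
    # is great on repetitive data and nearly useless on random bytes.
    import os
    import zlib

    text_like = b"the quick brown fox jumps over the lazy dog\n" * 200
    noise = os.urandom(len(text_like))  # stands in for busy photo data

    for label, data in (("text-like", text_like), ("random", noise)):
        packed = zlib.compress(data, level=9)
        assert zlib.decompress(packed) == data  # byte-for-byte lossless
        print(f"{label}: {len(data)} -> {len(packed)} bytes")

That is why screenshots of text belong in PNG while photographs belong in JPEG: lossless compression thrives on repetition, and photographs need the lossy, approximating approach.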

In many ways, the JPEG is one tool in our image-making toolkit. Despite its age and maturity, it remains one of our best options for sharing photos on the internet. But it is not a tool for every setting—despite the fact that, like a wrench sometimes used as a hammer, we often leverage it that way.

 
 

NO

The answer to the question, “Did NCSA Mosaic initially support inline JPEG files?” It’s surprising today, given the absolute ubiquity of the JPG format, but the browser that started the visual internet did not initially support JPG files without the use of an external reader. (It supported inline GIF files, however, along with the largely forgotten XBitMap format.) Support came in 1995, but by that point, Netscape Navigator had come out—explicitly promoting its offering of inline JPEG support as a marquee feature.

 
 

That same forest, at 15%. We are now down to 71k.

How a so-called patent troll was able to make bank off the JPEG in the early 2000s

If you’re a patent holder, the best kind of patent to hold is one that has been largely forgotten about, but is the linchpin of a common piece of technology already used by millions of people.

This is arguably what happened in 1986, when Compression Labs employees Wen-Hsiung Chen and Daniel J. Klenke filed what became U.S. Patent 4,698,672, “Coding system for reducing redundancy,” which dealt with a way to improve signal processing for motion graphics so that they took up less space in distribution. This arguably overlapped with what the JPEG format was doing. They had created a ticking time bomb for the computer industry. Someone just needed to find it.

And find it they did. In 1997, a company named Forgent Networks acquired Compression Labs, and in 2002, Forgent claimed this patent effectively gave them partial ownership of the JPEG format in various settings, including digital cameras. They started filing patent lawsuits—and winning, big.

"The patent, in some respects, is a lottery ticket," Forgent Chief Financial Officer Jay Peterson told CNET in 2005. "If you told me five years ago that 'You have the patent for JPEG,' I wouldn't have believed it."

Now, if this situation sounds familiar, it’s because a better-known company, Unisys, had done the exact same thing nearly a decade prior, except with the GIF format. The company began threatening CompuServe and others at a time when the GIF was the internet’s favorite file format. Unisys apparently had no qualms about being unpopular with internet users of the era, and charged website owners $5,000 to use GIFs. Admittedly, that company had a more cut-and-dried case for doing so, as the firm directly owned the Lempel–Ziv–Welch (LZW) compression format that GIFs used, it having been created by employees of its predecessor company, Sperry. (This led to the creation of the patent-free PNG format in 1995.)

We’re now at 7%—and just over 30k, or 1/33rd the size of the file at the top of the document. Check out the color degradation on this one.

But Forgent, despite having a far more tenuous ownership claim to the JPEG compression algorithm, was nonetheless much more successful in drawing money from patent lawsuits against JPEG users, earning more than $100 million from digital camera makers during the early 2000s before the patent finally ran out of steam around 2007. The company also attempted to convince PC makers to give it a billion dollars, before being talked down to a mere $8 million.

As Forgent tried to squeeze cash from its old patent, the company’s claims grew increasingly controversial. Eventually, the patent was narrowed in scope to cover only motion-based uses, i.e. video. On top of that, evidence of prior art was uncovered, because patent-troll critics were understandably pissed off when Forgent started suing in 2004.

(The company tried expanding its patent-trolly horizons during this period. It began threatening DVR-makers over a separate patent that described recording TV shows to a computer.)

Forgent Networks no longer exists under that name. In 2007, just as the compression patent expired, the company renamed itself Asure Software, which specializes in payroll and HR solutions. It used its money to get out of the patent-trolling game, which I guess is somewhat noble.

 
 

200M

The estimated number of images that the Library of Congress has inventoried in JPEG 2000 format, a successor standard to the JPEG, first released in 2001. The flexible update of the original JPEG format added better compression performance, but required more computational power. The original JPEG format is much more popular, but JPEG 2000 has found success in numerous niches.

 
 

The JPEG file format has served us well, and it has been difficult to remove from its perch. The JPEG 2000 format, for example, was intended to supplant it by offering more lossless options and better performance. However, it is less an end-user format than a specialized one.

JP2s are harder to find on the open web—one of the few places online that I see them happens to be the Internet Archive. (Which means the Internet Archive served images from that JPEG book in JP2 format.)

Our forest, saved at 1% quality. So much of the detail has been removed, yet you can still tell what it is. This image is only about 15k in size. That’s the power of the JPG.

Other image technologies have had somewhat more luck getting past the JPG format: the Google-supported WebP format has proven popular with website developers (if controversial for the folks actually saving images), while AVIF and HEIC, each developed by standards bodies, have largely outpaced both JPEG and JPEG 2000.

The JPEG will be difficult to kill at this juncture. These days, the format is similar to the MP3 file or the ZIP format—two legacy formats too popular to kill. Other formats that compress the files better and do the same things more efficiently are out there, but it’s difficult to topple a format with a 30-year head start.

Shaking off the JPG is easier said than done. I think most people will be fine to keep it around.





I am Jim Henson / The Playful Eye


Books That Belong On Paper first appeared on the web as Wink Books and was edited by Carla Sinclair. Sign up here to get the issues a week early in your inbox.


A CHILDREN’S BOOK ABOUT THE LIFE OF JIM HENSON

I am Jim Henson (Ordinary People Change the World)
by Brad Meltzer, Christopher Eliopoulos (Illustrator)
Dial Books
2017, 40 pages, 7.8 x 0.4 x 7.8 inches, Hardcover

Buy on Amazon

If you grew up at a certain time there were people who were icons. Way past the rank of celebrity, bigger than characters, they were men and women whose beings and creations were intertwined into the very fabric of the things we loved to watch, read, and do. And if you were anything like me, one of those people was Jim Henson. From Sesame Street to the Muppets to (especially for me) Labyrinth, his creations, and those that he curated and inspired, wove themselves deeply into the pop culture interests of kids all over the world. They were like the air: they just existed around us, and we felt like they were part of the natural order of things.

But in the end Jim Henson was just a person, just an ordinary human who started life simply and lived his life from there. Along the way, however, he changed the world with a piece of cloth he took from his mother’s coat and a ping pong ball.

Creating Kermit the Frog is just one of the stories that you’ll find in I Am Jim Henson, a great entry in the ongoing series “Ordinary People Change the World” by Brad Meltzer and Christopher Eliopoulos (this is one of their more recent releases, but the series covers important and fascinating figures like Rosa Parks, Jane Goodall, and George Washington, among many others). If you’re familiar with Eliopoulos’ work then you know you’re in for a visual treat, and you won’t be let down. I’ve found his Bill Watterson-inspired art a delight for years now, and this series is the perfect showcase for it. It’s cute, it’s funny, and both kids and adults will love it.

I’ve honestly read very little Meltzer, as I’m not a big fan of thriller novels or the comic book work of his that I’ve dipped my toes into, but his approach here is fascinating and really resonates with the reader. Meltzer uses Henson as his narrator, even going so far as to use many actual quotes by Henson as dialogue. And as a narrator, Henson isn’t telling the reader about his life; he’s telling the reader a story about his life. The distinction is important, as the book becomes a testament to storytelling, enriching perhaps the greatest accomplishment that Henson and his co-creators (many of whom are characters in the book) ever made: using impersonal and inanimate objects to create lively stories that could make a viewer laugh, cry, or think without spending a single moment thinking about the fact that a piece of cloth and a pair of hands was making it happen. Meltzer pulls the same trick in this book, turning an autobiographical book into a parable about the power of storytelling. Not a bad bit of sleight-of-hand for something intended to be read by (or to) rugrats.

The book also homes in on the concept of “goodness” that was a hallmark of Henson & Co.’s work, and Meltzer builds up to it carefully throughout the story, making the impact ring soundly. Henson believed that this goodness was the key ingredient of his work: that comedy didn’t have to be mean to be funny, and that you didn’t have to be mean to be funny. His approach was validated by the immense popularity of the work he was a part of, and while it’s not, of course, the only approach, it put Sesame Street and the Muppets in the hearts of millions and millions of people.

And that’s a story worth telling.

– Rob Trevino


THE PLAYFUL EYE IS A VIRTUAL FEAST OF GAMES AND VISUAL TRICKS GATHERED FROM AROUND THE WORLD

The Playful Eye: An Album of Visual Delight
by Julian Rothenstein, Mel Gooding
Chronicle Books
2000, 112 pages, 9.9 x 0.5 x 12.7 inches, Paperback

Buy on Amazon

These vintage cards and old placards display optical illusions, visual witticisms, hidden images, rebuses, and artistic paradoxes from yesteryear. They were the equivalent of GIFs back then — eye candy worth sharing. Here they are gathered in an oversized paperback for your entertainment and amazement.

– Kevin Kelly


The Lunacy of Artemis

In August 2020, the New York Times asked me to write an op-ed for a special feature on authoritarianism and democracy. They declined to publish my submission, which I am sharing here instead.

distant photo of Artemis rocket on launch pad

A little over 51 years ago, a rocket lifted off from Cape Canaveral carrying three astronauts and a space car. After a three-day journey to the moon, two of the astronauts climbed into a spindly lander and made the short trip down to the surface, where for another three days they collected rocks and did donuts in the space car. Then they climbed back into the lander, rejoined their colleague in orbit, and departed for Earth. Their capsule splashed down in the South Pacific on December 19, 1972. This mission, Apollo 17, would be the last time human beings ventured beyond low Earth orbit.

If you believe NASA, late in 2026 Americans will walk on the moon again. That proposed mission is called Artemis 3, and its lunar segment looks a lot like Apollo 17 without the space car. Two astronauts will land on the moon, collect rocks, take selfies, and about a week after landing rejoin their orbiting colleagues to go back to Earth.

But where Apollo 17 launched on a single rocket and cost $3.3 billion (in 2023 dollars), the first Artemis landing involves a dozen or two heavy rocket launches and costs so much that NASA refuses to give a figure (one veteran of NASA budgeting estimates it at $7-10 billion).[1] The single-use lander for the mission will be the heaviest spacecraft ever flown, and yet the mission's scientific return—a small box of rocks—is less than what came home on Apollo 17. And the whole plan hinges on technologies that haven't been invented yet becoming reliable and practical within the next eighteen months.

You don’t have to be a rocket scientist to wonder what’s going on here. If we can put a man on the moon, then why can't we just go do it again? The moon hasn’t changed since the 1960’s, while every technology we used to get there has seen staggering advances. It took NASA eight years to go from nothing to a moon landing at the dawn of the Space Age. But today, twenty years and $93 billion after the space agency announced our return to the moon, the goal seems as far out of reach as ever.[2]

Articles about Artemis often give the program’s tangled backstory. But I want to talk about Artemis as a technical design, because there’s just so much to drink in. While NASA is no stranger to complex mission architectures, Artemis goes beyond complex to the just plain incoherent. None of the puzzle pieces seem to come from the same box. Half the program requires breakthrough technologies that make the other half unnecessary. The rocket and spacecraft NASA spent two decades building can’t even reach the moon. And for reasons no one understands, there’s a new space station in the mix.

In the past, whatever oddball project NASA came up with, we at least knew they could build the hardware. But Artemis calls the agency’s competence as an engineering organization into question. For the first time since the early 1960's, it's unclear whether the US space agency is even capable of putting astronauts on the Moon.

Photograph of SLS rocket

A Note on Apollo

In this essay I make a lot of comparisons to Project Apollo. This is not because I think other mission architectures are inferior, but because the early success of that program sets such a useful baseline. At the dawn of the Space Age, using rudimentary technology, American astronauts landed on the moon six times in seven attempts. The moon landings were NASA’s greatest achievement and should set a floor for what a modern mission, flying modern hardware, might achieve.

Advocates for Artemis insist that the program is more than Apollo 2.0. But as we’ll see, Artemis can't even measure up to Apollo 1.0. It costs more, does less, flies less frequently, and exposes crews to risks that the steely-eyed missile men of the Apollo era found unacceptable. It's as if Ford in 2024 released a new model car that was slower, more accident-prone, and ten times more expensive than the Model T.

When a next-generation lunar program can’t meet the cost, performance, or safety standards set three generations earlier, something has gone seriously awry.

Photograph of SLS rocket

I. The Rocket

The jewel of Artemis is a big orange rocket with a flavorless name, the Space Launch System (SLS). SLS looks like someone started building a Space Shuttle and ran out of legos for the orbiter. There is the familiar orange tank, a big white pair of solid rocket boosters, but then the rocket just peters out in a 1960’s style stack of cones and cylinders.

The best way to think of SLS is as a balding guy with a mullet: there are fireworks down below that are meant to distract you from a sad situation up top. In the case of the rocket, those fireworks are a first stage with more thrust than the Saturn V, enough thrust that the boosted core stage can nearly put itself into orbit. But on top of this monster sits a second stage so anemic that even its name (the Interim Cryogenic Propulsion Stage) is a kind of apology. For eight minutes SLS roars into the sky on a pillar of fire. And then, like a cork popping out of a bottle, the tiny ICPS emerges and drifts vaguely moonwards on a wisp of flame.

With this design, the minds behind SLS achieved a first in space flight, creating a rocket that is at the same time more powerful and less capable than the Saturn V. While the 1960’s giant could send 49 metric tons to the Moon, SLS only manages 27 tons—not enough to fly an Apollo-style landing, not even enough to put a crew in orbit around the Moon without a lander. The best SLS can do is slingshot the Orion spacecraft once around the moon and back, a mission that will fly under the name Artemis 2.

NASA wants to replace ICPS with an ‘Exploration Upper Stage’ (the project has been held up, among other things, by a near-billion dollar cost overrun on a launch pad). But even that upgrade won’t give SLS the power of the Saturn V. For whatever reason, NASA designed its first heavy launcher in forty years to be unable to fly the simple, proven architecture of the Apollo missions.

Of course, plenty of rockets go on to enjoy rewarding, productive careers without being as powerful as the Saturn V. And if SLS rockets were piling up at the Michoud Assembly Facility like cordwood, or if NASA were willing to let its astronauts fly commercial, it would be a simple matter to split Artemis missions across multiple launches.

But NASA insists that astronauts fly SLS. And SLS is a “one and done” rocket, artisanally hand-crafted by a workforce that likes to get home before traffic gets bad. The rocket can only launch once every two years at a cost of about four billion dollars[3]—about twice what it would cost to light the rocket’s weight in dollar bills on fire[4].

Early on, SLS designers made the catastrophic decision to reuse Shuttle hardware, which is like using Fabergé eggs to save money on an omelette. The SLS core stage recycles Space Shuttle main engines, actual veterans of old Shuttle flights called out of retirement for one last job. Refurbishing a single such engine to work on SLS costs NASA $40 million, or a bit more than SpaceX spends on all 33 engines on its Superheavy booster.[5] And though the Shuttle engines are designed to be fully reusable (the main reason they're so expensive), every SLS launch throws four of them away. Once all the junkyards are picked clean, NASA will pay Aerojet Rocketdyne to restart production of the classic engine at a cool unit cost of $145 million[6].

The story is no better with the solid rocket boosters, the other piece of Shuttle hardware SLS reuses. Originally a stopgap measure introduced to save the Shuttle budget, these heavy rockets now attach themselves like barnacles to every new NASA launcher design. To no one’s surprise, retrofitting a bunch of heavy steel casings left over from Shuttle days has saved the program nothing. Each SLS booster is now projected to cost $266 million, or about twice the launch cost of a Falcon Heavy.[7] Just replacing the asbestos lining in the boosters with a greener material, a project budgeted at $4.4M, has now cost NASA a quarter of a billion dollars. And once the leftover segments run out seven rockets from now, SLS will need a brand new booster design, opening up fertile new vistas of overspending.

Costs on SLS have reached the point where private industry is now able to develop, test, and launch an entire rocket program for less than NASA spends on a single engine[8]. Flying SLS is like owning a classic car—everything is hand built, the components cost a fortune, and when you finally get the thing out of the shop, you find yourself constantly overtaken by younger rivals.

But the cost of SLS to NASA goes beyond money. The agency has committed to an antiquated frankenrocket just as the space industry is entering a period of unprecedented innovation. While other space programs get to romp and play with technologies like reusable stages and exotic alloys, NASA is stuck for years wasting a massive, skilled workforce on a dead-end design.

The SLS program's slow pace also affects safety. Back in the Shuttle era, NASA managers argued that it took three to four launches a year to keep workers proficient enough to build and launch the vehicles safely. A boutique approach where workers hand-craft one rocket every two years means having to re-learn processes and procedures with every launch.

It also leaves no room in Artemis for test flights. The program simply assumes success, flying all its important 'firsts' with astronauts on board. When there are unanticipated failures, like the extensive heat shield spalling and near burn-through observed in Artemis 1,[9] the agency has no way to test a proposed fix without a multi-year delay to the program. So they end up using indirect means to convince themselves that a new design is safe to fly, a process ripe for error and self-delusion.

Orion space capsule with OVERSIZE LOAD banner

II. The Spacecraft

Orion, the capsule that launches on top of SLS, is a relaxed-fit reimagining of the Apollo command module suitable for today’s larger astronaut. It boasts modern computers, half again as much volume as the 1960’s design, and a few creature comforts (like not having to poop in a baggie) that would have pleased the Apollo pioneers.

The capsule’s official name is the Orion Multi-Purpose Crew Vehicle, but finding even a single purpose for Orion has greatly challenged NASA. For twenty years the spacecraft has mostly sat on the ground, chewing through a $1.2 billion annual budget. In 2014, the first Orion flew a brief test flight. Eight short years later, Orion launched again, carrying a crew of instrumented mannequins around the Moon on Artemis 1. In 2025 the capsule (by then old enough to drink) is supposed to fly human passengers on Artemis 2.

Orion goes to space attached to a basket of amenities called the European Service Module. The ESM provides Orion with solar panels, breathing gas, batteries, and a small rocket that is the capsule’s principal means of propulsion. But because the ESM was never designed to go to the moon, it carries very little propellant—far too little to get the hefty capsule in and out of lunar orbit.[10]

And Orion is hefty. Originally designed to hold six astronauts, the capsule was never resized when the crew requirement shrank to four. Like an empty nester’s minivan, Orion now hauls around a bunch of mass and volume that it doesn’t need. Even with all the savings that come from replacing Apollo-era avionics, the capsule weighs almost twice as much as the Apollo Command Module.

This extra mass has knock-on effects across the entire Artemis design. Since a large capsule needs a large abort rocket, SLS has to haul Orion's massive Launch Abort System—seven tons of dead weight—nearly all the way into orbit. And reinforcing the capsule so that abort system won't shake the astronauts into jelly means making it heavier, which puts more demand on the parachutes and heat shield, and around and around we go.


Size comparison of the Apollo command and service module (left) and Orion + European Service Module (right)

What’s particularly frustrating is that Orion and ESM together have nearly the same mass as the Apollo command and service modules, which had no trouble reaching the Moon. The difference is all in the proportions. Where Apollo was built like a roadster, with a small crew compartment bolted onto an oversized engine, Orion is the Dodge Journey of spacecraft—a chunky, underpowered six-seater that advertises to the world that you're terrible at managing money.

diagram of near-rectilinear halo orbit

III. The Orbit

The fact that neither its rocket nor its spaceship can get to the Moon creates difficulties for NASA’s lunar program. So, like an aging crooner transposing old hits into an easier key, the agency has worked to find a ‘lunar-adjacent’ destination that its hardware can get to.

Their solution is a bit of celestial arcana called Near Rectilinear Halo Orbit, or NRHO. A spacecraft in this orbit circles the moon every 6.5 days, passing 1,000 kilometers above the lunar north pole at closest approach, then drifting out about 70,000 kilometers (a fifth of the Earth/Moon distance) at its furthest point. Getting to NRHO from Earth requires significantly less energy than entering a useful lunar orbit, putting it just within reach for SLS and Orion.[11]

To hear NASA tell it, NRHO is so full of advantages that it’s a wonder we stay on Earth. Spacecraft in the orbit always have a sightline to Earth and never pass through its shadow. The orbit is relatively stable, so a spacecraft can loiter there for months using only ion thrusters. And the deep space environment is the perfect place to practice going to Mars.

But NRHO is terrible for getting to the moon. The orbit is like one of those European budget airports that leaves you out in a field somewhere, requiring an expensive taxi. In Artemis, this taxi takes the form of a whole other spaceship—the lunar lander—which launches without a crew a month or two before Orion and is supposed to be waiting in NRHO when the capsule arrives.

Once these two spacecraft dock together, two astronauts climb into the lander from Orion and begin a day-long descent to the lunar surface. The other two astronauts wait for them in NRHO, playing hearts and quietly absorbing radiation.

Apollo landings also divided the crew between lander and orbiter. But those missions kept the command module in a low lunar orbit that brought it over the landing site every two hours. This proximity between orbiter and lander had enormous implications for safety. At any point in the surface mission, the astronauts on the moon could climb into the ascent rocket, hit the big red button, and be back sipping Tang with the command module pilot by bedtime. The short orbital period also gave the combined crew a dozen opportunities a day to return directly to Earth. [12]

Sitting in NRHO makes abort scenarios much harder. Depending on when in the mission it happens, a stricken lander might need three or more days to catch up with the orbiting Orion. In the worst case, the crew might find themselves stuck on the lunar surface for hours after an abort is called, forced to wait for Orion to reach a more favorable point in its orbit. And once everyone is back on Orion, more days might pass before the crew can depart for Earth. These long and variable abort times significantly increase risk to the crew, making many scenarios that were survivable on Apollo (like Apollo 13!) lethal on Artemis. [13]
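
A crude way to see the scale of the problem is to treat the worst-case wait for a rendezvous window as roughly half the orbiter's period. The Python sketch below is a deliberate oversimplification (real abort timelines depend on phasing and propellant margins), but the orders of magnitude come out right.

    # Crude yardstick: worst-case wait for a rendezvous window is taken
    # to be about half the orbiter's period. Real abort timelines depend
    # on phasing and propellant margins; this only shows the scale.
    periods_hours = {
        "Apollo, low lunar orbit": 2.0,
        "Artemis, NRHO": 6.5 * 24,
    }
    for architecture, period in periods_hours.items():
        print(f"{architecture}: worst-case wait ~{period / 2:.0f} hours")
    # Apollo: about an hour. Artemis: about 78 hours, more than three days.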

The abort issue is just one example of NRHO making missions slower. NASA likes to boast that Orion can stay in space far longer than Apollo, but this is like bragging that you’re in the best shape of your life after the bank repossessed your car. It's an oddly positive spin to put on bad life choices. The reason Orion needs all that endurance is because transit times from Earth to NRHO are long, and the crew has to waste additional time in NRHO waiting for orbits to line up. The Artemis 3 mission, for example, will spend 24 days in transit, compared to just 6 days on Apollo 11.

NRHO even dictates how long astronauts stay on the Moon—surface time has to be a multiple of the 6.5 day orbital period. This lack of flexibility means that even early flag-and-footprints missions like Artemis 3 have to spend at least a week on the moon, a constraint that adds considerable risk to the initial landing. [15]

In spaceflight, brevity is safety. There's no better way to protect astronauts from the risks of solar storms, mechanical failure, and other mishaps than by minimizing slack time in space. Moreover, a safe architecture should allow for a rapid return to Earth at any point in the mission. There’s no question astronauts on the first Artemis missions would be better off with Orion in low lunar orbit. The decision to stage from NRHO is an excellent example of NASA designing its lunar program in the wrong direction—letting deficiencies in the hardware dictate the level of mission risk. 

diagram of Gateway

Early diagram of Gateway. Note that the segment marked 'human lander system' now dwarfs the space station.

IV. Gateway

I suppose at some point we have to talk about Gateway. Gateway is a small modular space station that NASA wants to build in NRHO. It has been showing up in mission proposals like a bad smell since before 2012.

Early in the Artemis program, NASA described Gateway as a kind of celestial truck stop, a safe place for the lander to park and for the crew to grab a cup of coffee on their way to the moon. But when it became clear that Gateway would not be ready in time for Artemis 3, NASA re-evaluated. Reasoning that two spacecraft could meet up in NRHO just as easily as three, the agency gave permission for the first moon landing to proceed without a space station.

Despite this open admission that Gateway is unnecessary, building the space station remains the core activity of the Artemis program. The three missions that follow that first landing are devoted chiefly to Gateway assembly. In fact, initial plans for Artemis 4 left out a lunar landing entirely, as if it were an inconvenience to the real work being done up in orbit.

This is a remarkable situation. It’s like if you hired someone to redo your kitchen and they started building a boat in your driveway. Sure, the boat gives the builders a place to relax, lets them practice tricky plumbing and finishing work, and is a safe place to store their tools. But all those arguments will fail to satisfy. You still want to know what building a boat has to do with kitchen repair, and why you’re the one footing the bill.

NASA has struggled to lay out a technical rationale for Gateway. The space station adds both cost and complexity to Artemis, a program not particularly lacking in either. Requiring moon-bound astronauts to stop at Gateway also makes missions riskier (by adding docking operations) while imposing a big propellant tax. Aerospace engineer and pundit Robert Zubrin has aptly called the station a tollbooth in space.

Even Gateway defenders struggle to hype up the station. A common argument is that Gateway may not be ideal for any one thing, but is good for a whole lot of things. But that is the same line of thinking that got us SLS and Orion, both vehicles designed before anyone knew what to do with them. The truth is that all-purpose designs don't exist in human space flight. The best you can do is build a spacecraft that is equally bad at everything.

But to search for technical grounds is to misunderstand the purpose of Gateway. The station is not being built to shelter astronauts in the harsh environment of space, but to protect Artemis in the harsh environment of Congress. NASA needs Gateway to navigate an uncertain political landscape in the 2030’s. Without a station, Artemis will just be a series of infrequent multibillion dollar moon landings, a red cape waved in the face of the Office of Management and Budget. Gateway armors Artemis by bringing in international partners, each of whom contributes expensive hardware. As NASA learned building the International Space Station, this combination of sunk costs and international entanglement is a powerful talisman against program death.

Gateway also solves some other problems for NASA. It gives SLS a destination to fly to, stimulates private industry (by handing out public money to supply Gateway), creates a job for the astronaut corps, and guarantees the continuity of human space flight once the ISS becomes uninhabitable sometime in the 2030’s. [16]

That last goal may sound odd if you don’t see human space flight as an end in itself. But NASA is a faith-based organization, dedicated to the principle that taxpayers should always keep an American or two in orbit. It’s a little bit as if the National Oceanic and Atmospheric Administration insisted on keeping bathyscaphes full of sailors at the bottom of the sea, irrespective of cost or merit, and kneecapped programs that might threaten the continuous human benthic presence. You can’t argue with faith.

From a bureaucrat’s perspective, Gateway is NASA’s ticket back to a golden era in the early 2000's when the Space Station and Space Shuttle formed an uncancellable whole, each program justifying the existence of the other. Recreating this dynamic with Gateway and SLS/Orion would mean predictable budgets and program stability for NASA well into the 2050’s.

But Artemis was supposed to take us back to a different golden age, the golden age of Apollo. And so there’s an unresolved tension in the program between building Gateway and doing interesting things on the moon. With Artemis missions two or more years apart, it’s inevitable that Gateway assembly will push aspirational projects like a surface habitat or pressurized rover out into the 2040’s. But those same projects are on the critical path to Mars, where NASA still insists we’re going in the late 2030’s. The situation is awkward.

So that is the story of Gateway—unloved, ineradicable, and as we’ll see, likely to become the sole legacy of the Artemis program. 

artist's rendering of human landing system

V. The Lander

The lunar lander is the most technically ambitious part of Artemis. Where SLS, Orion, and Gateway are mostly a compilation of NASA's greatest hits, the lander requires breakthrough technologies with the potential to revolutionize space travel.

Of course, you can’t just call it a lander. In Artemis speak, this spacecraft is the Human Landing System, or HLS. NASA has delegated its design to two private companies, Blue Origin and SpaceX. SpaceX is responsible for landing astronauts on Artemis 3 and 4, while Blue Origin is on the hook for Artemis 5 (notionally scheduled for 2030). After that, the agency will take competitive bids for subsequent missions.

The SpaceX HLS design is based on their experimental Starship spacecraft, an enormous rocket that takes off and lands on its tail, like 1950’s sci-fi. There is a strong “emperor’s new clothes” vibe to this design. On the one hand, it is the brainchild of brilliant SpaceX engineers and passed NASA technical review. On the other hand, the lander seems to go out of its way to create problems for itself to solve with technology.

artist's rendering of human landing system

An early SpaceX rendering of the Human Landing System, with the Apollo Lunar Module added for scale.

To start with the obvious, HLS looks more likely to tip over than the last two spacecraft to land on the moon, which tipped over. It is a fifteen-story tower that must land on its ass in terrible lighting conditions, on rubble of unknown composition, over a light-second from Earth. The crew are left suspended so high above the surface that they need a folding space elevator (not the cool kind) to get down. And yet in the end this single-use lander carries less payload (both up and down) than the tiny Lunar Module on Apollo 17. Using Starship to land two astronauts on the moon is like delivering a pizza with an aircraft carrier.

Amusingly, the sheer size of the SpaceX design leaves it with little room for cargo. The spacecraft arrives on the Moon laden with something like 200 tons of cryogenic propellant,[14] and like a fat man leaving an armchair, it needs every drop of that energy to get its bulk back off the surface. Nor does it help matters that all this cryogenic propellant has to cook for a week in direct sunlight.

Other, less daring lander designs reduce their appetite for propellant by using a detachable landing stage. This arrangement also shields the ascent rocket from hypervelocity debris that gets kicked up during landing. But HLS is a one-piece rocket; the same engines that get sandblasted on their way down to the moon must relight without fail a week later.

Given this fact, it’s remarkable that NASA’s contract with SpaceX doesn’t require them to demonstrate a lunar takeoff. All SpaceX has to do to satisfy NASA requirements is land an HLS prototype on the Moon. Questions about ascent can then presumably wait until the actual mission, when we all find out together with the crew whether HLS can take off again.[17]

This fearlessness in design is part of a pattern with Starship HLS. Problems that other landers avoid in the design phase are solved with engineering. And it’s kind of understandable why SpaceX does it this way. Starship is meant to fly to Mars, a much bigger challenge than landing two people on the Moon. If the basic Starship design can’t handle a lunar landing, it would throw the company’s whole Mars plan into question. SpaceX is committed to making Starship work, which is different from making the best possible lunar lander.

Less obvious is why NASA tolerates all this complexity in the most hazardous phase of its first moon mission. Why land a rocket the size of a building packed with moving parts? It’s hard to look at the HLS design and not think back to other times when a room full of smart NASA people talked themselves into taking major risks because the alternative was not getting to fly at all.

It’s instructive to compare the HLS approach to the design philosophy on Apollo. Engineers on that program were motivated by terror; no one wanted to make the mistake that would leave astronauts stranded on the moon. The weapon they used to knock down risk was simplicity. The Lunar Module was a small metal box with a wide stance, built low enough so that the astronauts only needed to climb down a short ladder. The bottom half of the LM was a descent stage that completely covered the ascent rocket (a design that showed its value on Apollo 15, when one of the descent engines got smushed by a rock). And that ascent rocket, the most important piece of hardware in the lander, was a caveman design intentionally made so primitive that it would struggle to find ways to fail.

On Artemis, it's the other way around: the more hazardous the mission phase, the more complex the hardware. It's hard to look at all this lunar machinery and feel reassured, especially when NASA's own Aerospace Safety Advisory Panel estimates that the Orion/SLS portion of a moon mission alone (not including anything to do with HLS) already has a 1:75 chance of killing the crew.

artist's rendering of human landing system

VI. Refueling

Since NASA’s biggest rocket struggles to get Orion into distant lunar orbit, and HLS weighs fifty times as much as Orion, the curious reader might wonder how the unmanned lander is supposed to get up there.

NASA’s answer is, very sensibly, “not our problem”. They are paying Blue Origin and SpaceX the big bucks to figure this out on their own. And as a practical matter, the only way to put such a massive spacecraft into NRHO is to first refuel it in low Earth orbit.

Like a lot of space technology, orbital refueling sounds simple, has never been attempted, and can’t be adequately simulated on Earth.[18] The crux of the problem is that liquid and gas phases in microgravity jumble up into a three-dimensional mess, so that even measuring the quantity of propellant in a tank becomes difficult. To make matters harder, Starship uses cryogenic propellants that boil at temperatures about a hundred degrees colder than the plumbing they need to move through. Imagine trying to pour water from a thermos into a red-hot skillet while falling off a cliff and you get some idea of the difficulties.

To get refueling working, SpaceX will first have to demonstrate propellant transfer between rockets as a proof of concept, and then get the process working reliably and efficiently at a scale of hundreds of tons. (These are two distinct challenges). Once they can routinely move liquid oxygen and methane from Starship A to Starship B, they’ll be ready to set up the infrastructure they need to launch HLS.

artist's rendering of human landing system

The plan for getting HLS to the moon looks like this: a few months before the landing date, SpaceX will launch a special variant of their Starship rocket configured to serve as a propellant depot. Then they'll start launching Starships one by one to fill it up. Each Starship arrives in low Earth orbit with some residual propellant; it will need to dock with the depot rocket and transfer over this remnant fuel. Once the depot is full, SpaceX will launch HLS, have it fill its tanks at the depot rocket, and send it up to NRHO in advance of Orion. When Orion arrives, HLS will hopefully have enough propellant left on board to take on astronauts and make a single round trip from NRHO to the lunar surface.

Getting this plan to work requires solving a second engineering problem: how to keep cryogenic propellants cold in space. Low Earth orbit is a toasty place, and without special measures, the cryogenic propellants Starship uses will quickly vent off into space. The problem is easy to solve in deep space (use a sunshade), but becomes tricky in low Earth orbit, where a warm rock covers a third of the sky. (Boil-off is also a big issue for HLS on the moon.)

It’s not clear how many Starship launches it will take to refuel HLS. Elon Musk has said four launches might be enough; NASA Assistant Deputy Associate Administrator Lakiesha Hawkins says the number is in the “high teens”. Last week, SpaceX's Kathy Lueders gave a figure of fifteen launches.

The real number is unknown and will come down to four factors:

  1. How much propellant a Starship can carry to low Earth orbit.
  2. What fraction of that can be usably pumped out of the rocket.
  3. How quickly cryogenic propellant boils away from the orbiting depot.
  4. How rapidly SpaceX can launch Starships.

SpaceX probably knows the answer to (1), but isn’t talking. Data for (2) and (3) will have to wait for flight tests that are planned for 2025. And obviously a lot is riding on (4), also called launch cadence.
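
Since none of these numbers is public, the best anyone outside SpaceX can do is a toy model. Here is a back-of-the-envelope sketch in Python; every input is a placeholder guess, not a SpaceX figure, but it shows how factors (1) through (4) trade against each other:

    # Toy model of the depot fill campaign. All defaults are placeholder
    # guesses for illustration, not SpaceX data.
    def tanker_launches(propellant_needed_t=1000.0,   # assumed total to load
                        delivered_per_flight_t=40.0,  # factor 1: payload to LEO
                        transfer_efficiency=0.9,      # factor 2: fraction pumped over
                        boiloff_t_per_day=1.0,        # factor 3: depot losses
                        days_between_launches=6.0):   # factor 4: cadence
        """Count launches until the depot holds enough usable propellant."""
        stored, launches = 0.0, 0
        while stored < propellant_needed_t:
            launches += 1
            stored += delivered_per_flight_t * transfer_efficiency
            stored -= boiloff_t_per_day * days_between_launches  # lost waiting
            stored = max(stored, 0.0)
        return launches

    print(tanker_launches())                          # 34 with these guesses
    print(tanker_launches(delivered_per_flight_t=100.0,
                          days_between_launches=3.0)) # 12: bigger tankers, faster cadence

Note that cadence hurts twice in this model: launching slower both stretches out the campaign and feeds more of each delivery to boil-off.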

The record for heavy rocket launch cadence belongs to Saturn V, which launched three times in a four-and-a-half-month period in 1969. Second place belongs to the Space Shuttle, which flew nine times in the calendar year before the Challenger disaster. In third place is Falcon Heavy, which flew six times in a 13 month period beginning in November 2022.

For the refueling plan to work, Starship will have to break this record by a factor of ten, launching every six days or so across multiple launch facilities. [19] The refueling program can tolerate a few launch failures, as long as none of them damages a launch pad.
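
To put numbers on that factor of ten, here is the same comparison as a few lines of Python, using the launch counts above and approximate spans in days; the average-gap figure is a crude yardstick, nothing more:

    # Average days between consecutive launches vs. a six-day target.
    # (launches, span in days); spans are approximate.
    records = {
        "Saturn V (1969)":        (3, 135),
        "Shuttle (1985)":         (9, 365),
        "Falcon Heavy (2022-24)": (6, 395),
    }
    TARGET_GAP_DAYS = 6
    for name, (n, span) in records.items():
        gap = span / (n - 1)  # average gap between consecutive launches
        print(f"{name}: one launch every {gap:.0f} days, "
              f"{gap / TARGET_GAP_DAYS:.0f}x slower than Starship needs")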

There’s no company better prepared to meet this challenge than SpaceX. Their Falcon 9 rocket has shattered records for both reliability and cadence, and now launches about once every three days. But it took SpaceX ten years to get from the first orbital Falcon 9 flight to a weekly cadence, and Starship is vastly bigger and more complicated than the Falcon 9. [20]

Working backwards from the official schedule allows us to appreciate the time pressure facing SpaceX. To make the official Artemis landing date, SpaceX has to land an unmanned HLS prototype on the moon in early 2026. That means tanker flights to fill an orbiting depot would start in late 2025. This doesn’t leave a lot of time for the company to invent orbital refueling, get it working at scale, make it efficient, deal with boil-off, get Starship launching reliably, begin recovering booster stages,[21] set up additional launch facilities, achieve a weekly cadence, and at the same time design and test all the other systems that need to go into HLS.

Lest anyone think I’m picking on SpaceX, the development schedule for Blue Origin’s 2029 lander is even more fantastical. That design requires pumping tons of liquid hydrogen between spacecraft in lunar orbit, a challenge perhaps an order of magnitude harder than what SpaceX is attempting. Liquid hydrogen is bulky, boils near absolute zero, and is infamous for its ability to leak through anything (the Shuttle program couldn't get a handle on hydrogen leaks on Earth even after a hundred-some launches). And the rocket Blue Origin needs to test all this technology has never left the ground.

The upshot is that NASA has put a pair of last-minute long-shot technology development programs between itself and the moon. Particularly striking is the contrast between the ambition of the HLS designs and the extreme conservatism and glacial pace of SLS/Orion. The same organization that spent 23 years and 20 billion dollars building the world's most vanilla spacecraft demands that SpaceX darken the sky with Starships within four years of signing the initial HLS contract. While thrilling for SpaceX fans, this is pretty unserious behavior from the nation’s space agency, which had several decades' warning that going to the moon would require a lander.

All this to say, it's universally understood that there won’t be a moon landing in 2026. At some point NASA will have to officially slip the schedule, as it did in 2021, 2023, and at the start of this year. If this accelerating pattern of delays continues, by year’s end we might reach a state of continuous postponement, a kind of scheduling singularity where the landing date for Artemis 3 recedes smoothly and continuously into the future.

Otherwise, it's hard to imagine a manned lunar landing before 2030, if the Artemis program survives that long.

Interior of Skylab

VII. Conclusion

I want to stress that there’s nothing wrong with NASA making big bets on technology. Quite the contrary, the audacious HLS contracts may be the healthiest thing about Artemis. Visionaries at NASA identified a futuristic new energy source (space billionaire egos) and found a way to tap it on a fixed-cost basis. If SpaceX or Blue Origin figure out how to make cryogenic refueling practical, it will mean a big step forward for space exploration, exactly the thing NASA should be encouraging. And if the technology doesn’t pan out, we’ll have found that out mostly by spending Musk’s and Bezos’s money.

The real problem with Artemis is that it doesn’t think through the consequences of its own success. A working infrastructure for orbital refueling would make SLS and Orion superfluous. Instead of waiting two years to go up on a $4 billion rocket, crews and cargo could launch every weekend on cheap commercial rockets, refueling in low Earth orbit on their way to the Moon. A similar logic holds for Gateway. Why assemble a space station out of habitrail pieces in lunar orbit, like an animal, when you can build one on Earth and launch it in one piece? Better yet, just spraypaint “GATEWAY” on the side of the nearest Starship, send it out to NRHO, and save NASA and its international partners billions. Having a working gas station in low Earth orbit fundamentally changes what is possible, in a way the SLS/Orion arm of Artemis doesn't seem to recognize.

Conversely, if SpaceX and Blue Origin can’t make cryogenic refueling work, then NASA has no plan B for landing on the moon. All the Artemis program will be able to do is assemble Gateway. Promising taxpayers the moon only to deliver ISS Jr. does not broadcast a message of national greatness, and is unlikely to get Congress excited about going to Mars. The hurtful comparisons between American dynamism in the 1960’s and whatever it is we have now will practically write themselves.

What NASA is doing is like an office worker blowing half their salary on lottery tickets while putting the other half in a pension fund. If the lottery money comes through, then there was really no need for the pension fund. But without the lottery win, there’s not enough money in the pension account to retire on. The two strategies don't make sense together.

There’s a ‘realist’ school of space flight that concedes all this but asks us to look at the bigger picture. We’re never going to have the perfect space program, the argument goes, but the important thing is forward progress. And Artemis is the first program in years to survive a presidential transition and have a shot at getting us beyond low Earth orbit. With Artemis still funded, and Starship making rapid progress, at some point we’ll finally see American astronauts back on the moon.

But this argument has two flaws. The first is that it feeds a cycle of dysfunction at NASA that is rapidly making it impossible for us to go anywhere. Holding human space flight to a different standard than NASA’s science missions has been a disaster for space exploration. Right now the Exploration Systems Development Mission Directorate (the entity responsible for manned space flight) couldn’t build a toaster for less than a billion dollars. Incompetence, self-dealing, and mismanagement that end careers on the science side of NASA are not just tolerated but rewarded on the human space flight side. Before we let the agency build out its third white elephant project in forty years, it’s worth reflecting on what we're getting in return for half our exploration budget.

The second, more serious flaw in the “realist” approach is that it enables a culture of institutional mendacity that must ultimately be fatal to an engineering organization. We've reached a point where NASA lies constantly, to both itself and to the public. It lies about schedules and capabilities. It lies about the costs and the benefits of its human spaceflight program. And above all, it lies about risk. All the institutional pathologies identified in the Rogers Report and the Columbia Accident Investigation Board are alive and well in Artemis—groupthink, management bloat, intense pressure to meet impossible deadlines, and a willingness to manufacture engineering rationales to justify flying unsafe hardware.

Do we really have to wait for another tragedy, and another beautifully produced Presidential Commission report, to see that Artemis is broken?

Notes

[1] Without NASA's help, it's hard to put a dollar figure on a mission without making somewhat arbitrary decisions about what to include and exclude. The $7-10 billion estimate comes from a Bush-era official in the Office of Management and Budget commenting on the NASA Spaceflight Forum.

And that $7.2B assumes Artemis III stays on schedule. Based on the FY24 budget request, each additional year between Artemis II and Artemis III adds another $3.5B to $4.0B in Common Exploration costs to Artemis III. If Artemis III goes off in 2027, then it will be $10.8B total. If 2028, then $14.3B.

In other words, it's hard to break out an actual cost while the launch dates for both Artemis II and III keep slipping.

NASA's own Inspector General estimates the cost of just the SLS/Orion portion of a moon landing at $4.1 billion.

[2] The first US suborbital flight, Freedom 7, launched on May 5, 1961. Armstrong and Aldrin landed on the moon eight years and two months later, on July 20, 1969. President Bush announced the goal of returning to the Moon in a January 2004 speech, setting the target date for the first landing "as early as 2015", and no later than 2020.

[3] NASA refuses to track the per-launch cost of SLS, so it's easy to get into nerdfights. Since the main cost driver on SLS is the gigantic workforce employed on the project, something like two or three times the headcount of SpaceX, the cost per launch depends a lot on cadence. If you assume a yearly launch rate (the official line), then the rocket costs $2.1 billion a launch. If like me you think one launch every two years is optimistic, the cost climbs up into the $4-5 billion range.
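
To make the cadence dependence concrete, here is the dispute as a two-line model in Python; the dollar inputs are chosen only so the outputs land on the estimates above, they are not official budget lines:

    # Standing-army accounting: per-launch cost is (fixed yearly spend x
    # years between launches) + incremental hardware. Inputs are fitted
    # to the estimates above, not official budget lines.
    def sls_cost_per_launch_b(annual_fixed_b=2.0,       # assumed, $B per year
                              marginal_hardware_b=0.1,  # assumed, $B per rocket
                              years_between_launches=1.0):
        return annual_fixed_b * years_between_launches + marginal_hardware_b

    print(sls_cost_per_launch_b(years_between_launches=1.0))  # 2.1, the official-cadence figure
    print(sls_cost_per_launch_b(years_between_launches=2.0))  # 4.1, the pessimist's figure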

[4] The SLS weighs 2,600 metric tons fully fueled, and conveniently enough a dollar bill weighs about 1 gram.

[5] SpaceX does not disclose the cost, but it's widely assumed the Raptor engine used on Superheavy costs $1 million.

[6] The $145 million figure comes from dividing the contract cost by the number of engines, caveman style. Others have reached a figure of $100 million for the unit cost of these engines. The important point is not who is right but the fact that NASA is paying vastly more than anyone else for engines of this class.

[7] $266M is the figure you get by dividing the $3.2 billion Booster Production and Operations contract to Northrop Grumman by the number of boosters (12) in the contract. Source: Office of the Inspector General. For cost overruns replacing asbestos, see the OIG report on NASA’s Management of the Space Launch System Booster and Engine Contracts. The Department of Defense paid $130 million for a Falcon Heavy launch in 2023.

[8] Rocket Lab developed, tested, and flew its Electron rocket for a total program cost of $100 million.

[9] In particular, the separation bolts embedded in the Orion heat shield were built based on a flawed thermal model, and need to be redesigned to safely fly a crew. From the OIG report:

Separation bolt melt beyond the thermal barrier during reentry can expose the vehicle to hot gas ingestion behind the heat shield, exceeding Orion’s structural limits and resulting in the breakup of the vehicle and loss of crew. Post-flight inspections determined there was a discrepancy in the thermal model used to predict the bolts’ performance pre-flight. Current predictions using the correct information suggest the bolt melt exceeds the design capability of Orion.

The current plan is to work around these problems on Artemis 2, and then redesign the components for Artemis 3. That means astronauts have to fly at least twice with an untested heat shield design.

[10] Orion/ESM has a delta V budget of 1340 m/s. Getting into and out of an equatorial low lunar orbit takes about 1800 m/s, more for a polar orbit. (See source.)

[11] It takes about 900 m/s of total delta V to get in and out of NRHO, comfortably within Orion/ESM's 1340 m/s budget. (See source.)
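
Put together with note [10], the whole orbit decision fits in a few lines of Python (no new numbers, just the figures quoted in these two notes):

    # Orion/ESM delta-v budget vs. round-trip cost of each destination.
    BUDGET_MS = 1340  # m/s, Orion/ESM (note 10)
    round_trips_ms = {
        "NRHO": 900,                         # note 11
        "equatorial low lunar orbit": 1800,  # note 10
    }
    for orbit, dv in round_trips_ms.items():
        verdict = "fits" if dv <= BUDGET_MS else "does not fit"
        print(f"{orbit}: {dv} m/s -> {verdict} within {BUDGET_MS} m/s")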

[12] In Carrying the Fire, Apollo 11 astronaut Michael Collins recalls carrying a small notebook covering 18 lunar rendezvous scenarios he might be called on to fly in various contingencies. If the Lunar Module could get itself off the surface, there was probably a way to dock with it.

For those too young to remember, Tang is a powdered orange drink closely associated with the American space program.

[13] For a detailed (if somewhat cryptic) discussion of possible Artemis abort modes to NRHO, see HLS NRHO to Lunar Surface and Back Mission Design, NASA 2022.

[14] This is my own speculative guess; the answer is very sensitive to the dry weight of HLS and the boil-off rate of its cryogenic propellants. Delta V from the lunar surface to NRHO is 2,610 m/sec. Assuming HLS weighs 120 tons unfueled, it would need about 150 metric tons of propellant to get into NRHO from the lunar surface. Adding safety margin, fuel for docking operations, and allowing for a week of boiloff gets me to about 200 tons.
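
For anyone who wants to reproduce the arithmetic, here it is via the Tsiolkovsky rocket equation in Python; the Isp is an assumed value for a vacuum Raptor, not a SpaceX number:

    # Sanity check with the Tsiolkovsky rocket equation.
    # Isp and dry mass are assumptions, not SpaceX figures.
    from math import exp

    g0 = 9.81        # m/s^2
    isp_s = 370.0    # assumed vacuum Raptor specific impulse, seconds
    ve = isp_s * g0  # effective exhaust velocity, ~3,630 m/s

    dv = 2610.0      # m/s, lunar surface to NRHO (this note)
    m_dry_t = 120.0  # assumed HLS dry mass, metric tons

    # Rearranging dv = ve * ln((m_dry + m_prop) / m_dry):
    m_prop_t = m_dry_t * (exp(dv / ve) - 1)
    print(f"{m_prop_t:.0f} t")  # ~126 t before margins, docking fuel, and boil-off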

[15] The main safety issue is the difficult thermal environment at the landing site, where the Sun sits just above the horizon, heating half the lander. If it weren't for the NRHO constraint, it's very unlikely Artemis 3 would spend more than a day or two on the lunar surface.

[16] The ISS program has been repeatedly extended, but the station is coming up against physical limiting factors (like metal fatigue) that will soon make it too dangerous to use.

[17] Recent comments by NASA suggest SpaceX has voluntarily added an ascent phase to its landing demo, ending a pretty untenable situation. However, there's still no requirement that the unmanned landing/ascent demo be performed using the same lander design that will fly on the actual mission, another oddity in the HLS contract.

[18] To be precise, I'm talking about moving bulk propellant between rockets in orbit. There are resupply flights to the International Space Station that deliver about 850 kilograms of non-cryogenic propellant to boost the station in its orbit, and there have been small-scale experiments in refueling satellites. But no one has attempted refueling a flown rocket stage in space, cryogenic or otherwise.

[19] Both SpaceX's Kathy Lueders and NASA confirm Starship needs to launch from multiple sites. Here's an excerpt from the minutes of the NASA Advisory Council Human Exploration and Operations Committee meeting on November 17 and 20, 2023:

Mr. [Wayne] Hale asked where Artemis III will launch from. [Assistant Deputy AA for Moon to Mars Lakiesha] Hawkins said that launch pads will be used in Florida and potentially Texas. The missions will need quite a number of tankers; in order to meet the schedule, there will need to be a rapid succession of launches of fuel, requiring more than one site for launches on a 6-day rotation schedule, and multiples of launches.

[20] Falcon 9 first flew in June of 2010 and achieved a weekly launch cadence over a span of six launches starting in November 2020.

[21] Recovering Superheavy stages is not a NASA requirement for HLS, but it's a huge cost driver for SpaceX given the number of launches involved.

digdoug
66 days ago
Jesus this is brutal.
Louisville, KY
WorldMaker
66 days ago
It’s fascinating. SLS versus SpaceX versus Blue Origin. SLS unlikely to succeed. If SpaceX or Blue Origin actually succeed at their crazy goals then SLS is entirely unnecessary. If NASA does everything with SLS it will be a technical miracle. If SpaceX or Blue Origin succeed it will be less of a miracle, but still equally surprising at the timeline given. It’s a huge win for NASA though if Billionaires pay for the actual hard stuff. SpaceX and Blue Origin and some of NASA are doing everything with the idea of the Moon as a “gas station” on the way to Mars, which is more exciting than the Moon anyway. SLS barely gets to ISS, much less the Moon and so badly near sighted at reliving the shuttle glory days instead of moving the program forward. I don’t know who to root for, other than for NASA itself, and maybe against the SLS, as much as I appreciate pork barrels.

The Great Flattening

1 Comment and 2 Shares

Apple did what needed to be done to get that unfortunate iPad ad out of the news; you know, the one that somehow found the crushing of musical instruments and bottles of paint to be inspirational.

The ad was released as a part of the company’s iPad event, and was originally scheduled to run on TV; Tor Myhren, Apple’s vice-president of marketing communications, told AdAge:

Creativity is in our DNA at Apple, and it’s incredibly important to us to design products that empower creatives all over the world…Our goal is to always celebrate the myriad of ways users express themselves and bring their ideas to life through iPad. We missed the mark with this video, and we’re sorry.

The apology comes across as heartfelt — accentuated by the fact that an Apple executive put his name to it — but I disagree with Myhren: the reason why people reacted so strongly to the ad is that it couldn’t have hit the mark more squarely.

Aggregation Theory

The Internet, birthed as it was in the idealism of California tech in the latter parts of the 20th century, was expected to be a force for decentralization; one of the central conceits of this blog has been to explain why reality has been so different. From 2015’s Aggregation Theory:

The fundamental disruption of the Internet has been to turn this dynamic on its head. First, the Internet has made distribution (of digital goods) free, neutralizing the advantage that pre-Internet distributors leveraged to integrate with suppliers. Secondly, the Internet has made transaction costs zero, making it viable for a distributor to integrate forward with end users/consumers at scale.

Aggregation Theory

This has fundamentally changed the plane of competition: no longer do distributors compete based upon exclusive supplier relationships, with consumers/users an afterthought. Instead, suppliers can be commoditized leaving consumers/users as a first order priority. By extension, this means that the most important factor determining success is the user experience: the best distributors/aggregators/market-makers win by providing the best experience, which earns them the most consumers/users, which attracts the most suppliers, which enhances the user experience in a virtuous cycle.

In short, the analog world was defined by scarcity, which meant distribution of scarce goods was the locus of power; the digital world is defined by abundance, which means discovery of what you actually want to see is the locus of power. The result is that consumers have access to anything, which is to say that nothing is special; everything has been flattened.

  • Google broke down every publication in the world into individual pages; search results didn’t deliver you to the front page of a newspaper or magazine, but rather dropped you onto individual articles.
  • Facebook promoted user-generated content to the same level of the hierarchy as articles from professional publications; your feed might have a picture of your niece followed by a link to a deeply-reported investigative report followed by a meme.
  • Amazon created the “Everything Store” with practically every item on Earth and the capability to deliver it to your doorstep; instead of running errands you could simply check out.
  • Netflix transformed “What’s on?” to “What do you want to watch?”. Everything from high-brow movies to budget flicks to prestige TV to reality TV was on equal footing, ready to be streamed whenever and wherever you wanted.
  • Sites like Expedia and Booking changed travel from an adventure mediated by a travel agent or long-standing brands to search results organized by price and amenities.

Moreover, this was only v1; it turns out that the flattening can go even further:

  • LLMs are breaking down all written text ever into massive models that don’t even bother with pages: they simply give you the answer.
  • TikTok disabused Meta of the notion that your relationships were a useful constraint on the content you wanted to see; now all short-form video apps surface content from across the entire network based on their understanding of what you individually are interested in.
  • Amazon is transforming into a logistics powerhouse befitting the fact that Amazon.com is increasingly dominated by 3rd-party merchant sales, and extending that capability throughout the economy.
  • All of Hollywood, convinced that content was what mattered, jointly killed the linear TV model to ensure that all professionally-produced content was available on-demand, even as YouTube became the biggest streamer of all with user-generated content that is delivered through the exact same distribution channel (apps on a smart device) as the biggest blockbusters.
  • Services like Uber and Airbnb commoditized transportation and lodging to the individual driver or homeowner.

Apple is absent from this list, although the App Store has had an Aggregator effect on developers; the reason the company belongs, though, and why they were the only company that could make an ad that so perfectly captures this great flattening, is because they created the device on which all of these services operate. The prerequisite to the commoditization of everything is access to anything, thanks to the smartphone. “There’s an app for that” indeed.

This is what I mean when I say that Apple’s iPad ad hit the mark: the reason why I think the ad resonated so deeply is that it captured something deep in the gestalt that actually has very little to do with trumpets or guitars or bottles of paint; rather, thanks to the Internet — particularly the smartphone-denominated Internet — everything is an app.

The Bicycle for the Mind

The more tangible way to see how that iPad ad hit the mark is to play it in reverse.

This is without question the message that Apple was going for: this one device, thin as can be, contains musical instruments, an artist’s studio, an arcade machine, and more. It brings relationships without borders to life, complete with cute emoji. And that’s not wrong!

Indeed, it harkens back to one of Steve Jobs’ last keynotes, when he introduced the iPad 2. My favorite moment in that keynote — one of my favorite Steve Jobs keynote moments ever, in fact — was the introduction of GarageBand. You can watch the entire introduction and demo, but the part that stands out in my memory is Jobs — clearly sick, in retrospect — moved by what the company had just produced:

I’m blown away with this stuff. Playing your own instruments, or using the smart instruments, anyone can make music now, in something that’s this thick and weighs 1.3 pounds. It’s unbelievable. GarageBand for iPad. Great set of features — again, this is no toy. This is something you can really use for real work. This is something that, I cannot tell you, how many hours teenagers are going to spend making music with this, and teaching themselves about music with this.

Jobs wasn’t wrong: global hits have originated on GarageBand, and undoubtedly many more hours of (mostly terrible, if my personal experience is any indication) amateur experimentation. Why I think this demo was so personally meaningful for Jobs, though, is that not only was GarageBand about music, one of his deepest passions, but it was also a manifestation of his life’s work: creating a bicycle for the mind.

I remember reading an article when I was about 12 years old, I think it might have been in Scientific American, where they measured the efficiency of locomotion for all these species on planet earth. How many kilocalories did they expend to get from point A to point B, and the condor won: it came in at the top of the list, surpassed everything else. And humans came in about a third of the way down the list, which was not such a great showing for the crown of creation.

But somebody there had the imagination to test the efficiency of a human riding a bicycle. Human riding a bicycle blew away the condor, all the way off the top of the list. And it made a really big impression on me that we humans are tool builders, and that we can fashion tools that amplify these inherent abilities that we have to spectacular magnitudes, and so for me a computer has always been a bicycle of the mind, something that takes us far beyond our inherent abilities.

I think we’re just at the early stages of this tool, very early stages, and we’ve come only a very short distance, and it’s still in its formation, but already we’ve seen enormous changes, but I think that’s nothing compared to what’s coming in the next 100 years.

In Jobs’ view of the world, teenagers the world over are potential musicians, who might not be able to afford a piano or guitar or trumpet; if, though, they can get an iPad — now even thinner and lighter! — they can have access to everything they need. In this view “There’s an app for that” is profoundly empowering.

After the Flattening

The duality of Apple’s ad speaks to the reality of technology: its impact is structural, and amoral. When I first started Stratechery I wrote a piece called Friction:

If there is a single phrase that describes the effect of the Internet, it is the elimination of friction. With the loss of friction, there is necessarily the loss of everything built on friction, including value, privacy, and livelihoods. And that’s only three examples! The Internet is pulling out the foundations of nearly every institution and social more that our society is built upon.

Count me with those who believe the Internet is on par with the industrial revolution, the full impact of which stretched over centuries. And it wasn’t all good. Like today, the industrial revolution included a period of time that saw many lose their jobs and a massive surge in inequality. It also lifted millions of others out of subsistence farming. Then again, it also propagated slavery, particularly in North America. The industrial revolution led to new monetary systems, and it created robber barons. Modern democracies sprouted from the industrial revolution, and so did fascism and communism. The quality of life of millions and millions was unimaginably improved, and millions and millions died in two unimaginably terrible wars.

Change is guaranteed, but the type of change is not; never is that more true than today. See, friction makes everything harder, both the good we can do, but also the unimaginably terrible. In our zeal to reduce friction and our eagerness to celebrate the good, we ought not lose sight of the potential bad.

Today that exhortation might run in the opposite direction: in our angst about the removal of specialness and our eagerness to criticize the bad, we ought not lose sight of the potential good.

Start with this site that you are reading: yes, the Internet commoditized content that was previously granted value by virtue of being bundled with a light manufacturing business (i.e. printing presses and delivery trucks), but it also created the opportunity for entirely new kinds of content predicated on reaching niche audiences that are only sustainable when the entire world is your market.

The same principle applies to every other form of content, from music to video to books to art; the extent to which being “special” meant being scarce is the extent to which the existence of “special” meant a constriction of opportunity. Moreover, that opportunity is not a function of privilege but rather consumer demand: the old powers may decry that their content is competing with everyone on the Internet, but they are only losing to the extent that consumers actually prefer to read or watch or listen to something else. Is this supposed to be a bad thing?

Moreover, this is just as much a feather in Apple’s cap as the commoditization of everything is a black mark: Apple creates devices — tools — that let everyone be a creator. Indeed, that is why the ad works in both directions: the flattening of everything means there has been a loss; the flattening of everything also means there is entirely new opportunity.

The AI Choice

One thing I do credit Apple for is not trying to erase the ad from the Internet — it’s still posted on CEO Tim Cook’s X account — because I think it’s important not just as a marker of what has happened over the last several years, but also the choices facing us in the years ahead.

The last time I referenced Steve Jobs’ “Bicycle of the Mind” analogy was in 2018’s Tech’s Two Philosophies, where I contrasted Google and Facebook on one side, and Microsoft and Apple on the other: the former wanted to create products that did things for you; the latter products that let you do more things. This was a simplified characterization, to be sure, but, as I noted in that Article, it was also related to their traditional positions as Aggregators and platforms, respectively.

What is increasingly clear, though, is that Jobs’ prediction that future changes would be even more profound raises questions about the “bicycle for the mind” analogy itself: specifically, will AI be a bicycle that we control, or an unstoppable train to destinations unknown? To put it in the same terms as the ad, will human will and initiative be flattened, or expanded?

The route to the former seems clear, and maybe even the default: this is a world where a small number of entities “own” AI, and we use it — or are used by it — on their terms. This is the outcome being pushed by those obsessed with “safety”, and demanding regulation and reporting; that those advocates also seem to have a stake in today’s leading models seems strangely ignored.

The alternative — MKBHDs For Everything — means openness and commoditization. Yes, those words have downsides: they mean that the powers that be are not special, and sometimes that is something we lament, as I noted at the beginning of this Article. Our alternative, though, is not the gatekept world of the 20th century — we can’t go backwards — but one where the flattening is not the elimination of vitality but the tilling of the ground so that something — many things — new can be created.

digdoug
72 days ago
This really hits so many nails on the head.
Louisville, KY

An aged creation: Unveiling the LEGO whiskey distillery

1 Share

Take a look at this intriguing LEGO set designed for ages 21 and above. Crafted by builder Versteinert and titled ‘Whiskey Distillery,’ it showcases a plethora of imaginative uses for both common and uncommon pieces, resulting in a creation seemingly tailored for adult enthusiasts. This model serves as the builder’s entry for the third round of the 2024 RogueOlympics, a contest that tasks participants with creating designs using no more than 101 LEGO elements. The theme for this round was ‘Volume,’ and I find the approach to such a simple word quite refreshing. Upon closer inspection of the build, one can spot a couple of inside-out tires, a selection of Harry Potter wands, a gray cattle horn, and even a magic lamp unique to a certain Disney villain, among other elements.

Whiskey Distillery

