
Artifacting


Today in Tedium: I fully admit it—I stretch images. I also intentionally wash images out, remove as many colors as possible, and save the images in formats that actively degrade the final result. This is a crime against imagery on the internet, an active disregard for the integrity of the original picture, but I see it as having some artistic advantages. Degradation, you see, is a tenet of the modern internet, something images must endure to flow through the wires more quickly. Gradually, the wires got fast enough that nearly any still image could be delivered through them in a reasonable amount of time. But the artifacts still matter. The degradation still matters. The JPEG was the puzzle piece that made the visual internet work. With that in mind, today’s Tedium considers how the JPEG came to life. — Ernie @ Tedium

Today’s GIF comes from a Computer Chronicles episode on file compression. Enjoy, nerds.


TLDR

Want a byte-sized version of Hacker News? Try TLDR’s free daily newsletter.

TLDR covers the most interesting tech, science, and coding news in just 5 minutes.

No sports, politics, or weather.

Subscribe for free!


We are going to display this image of a forest at a variety of quality settings. At 100% quality and 1,200 pixels wide, it is almost a full megabyte. (Entire series by Irina Iriser/Unsplash)

The GIF was a de facto standard. The JPEG was an actual one

I always thought it disappointing that the one time Steve Wilhite truly addressed his audience of admirers in the modern day, he attempted to explain how the file format he invented was pronounced. And it didn’t go over particularly well.

I remember it well. Back in 2013, he claimed it was pronounced with a soft G, like the brand of peanut butter. I posted the quote on ShortFormBlog, and it got nearly 5,000 “notes” on Tumblr. Many commenters were steamed that this random guy had emerged after a quarter-century to tell them how their word was supposed to be pronounced. I’m convinced this post unwittingly set the tide against Wilhite on the GIF’s favorite platform, despite the fact that I personally agreed with him.

The Frogman, a key innovator of the animated GIF form, put it as such: “It’s like someone trying to tell you ‘Sun’ is actually pronounced wombatnards.”

But in many ways, the situation illustrates how Wilhite, who died in 2022, did not develop his format by committee. He could say it sounded like “JIF” because he literally built it himself. It was not the creation of a huge group of people from different parts of the corporate world. He was handed the project as a CompuServe employee in 1987. He produced the object, and that was that. The initial document describing how it works? Dead simple. Thirty-seven years later, we’re still using the GIF.

The JPEG, which formally emerged about five years later, was very much not that situation. Far from it, in fact—it’s the difference between a de facto standard and an actual one.

Built with input from dozens of stakeholders, the goal of the Joint Photographic Experts Group was ultimately to create a format that fit everyone’s needs. And when the format was finally unleashed on the world, it was the subject of a 600-plus-page book.

And that book, not going to lie, has a killer cover:

Look at this hip cover; excellent example of 1992 design.

JPEG: Still Image Data Compression Standard, written by IBM employees and JPEG organization stakeholders William B. Pennebaker and Joan L. Mitchell, describes a landscape of multimedia imagery held back by the lack of a way to balance the need for photorealistic images against the need for immediacy:

JPEG now stands at the threshold of widespread use in diverse applications. Many new technologies are converging to help make this happen. High-quality continuous-tone color displays are now a part of most personal computing systems. Most of these systems measure their storage in megabytes, and the processing power at the desk is approaching that of mainframes of just a few years ago. Communication over telephone lines is now routinely at 9,600 baud, and with each year modem capabilities improve. LANs are now in widespread use. CD-ROM and other mass-storage devices are opening up the era of electronic books. Multimedia applications promise to use vast numbers of images and digital cameras are already commercially available.

These technology trends are opening up both a capability and a need for digital continuous-tone color images. However, until JPEG compression came upon the scene, the massive storage requirement for large numbers of high-quality images was a technical impediment to widespread use of images. The problem was not so much the lack of algorithms for image compression (as there is a long history of technical work in this area), but, rather, the lack of a standard algorithm—one which would allow an interchange of images between diverse applications. JPEG has provided a high-quality yet very practical and simple solution to this problem.

And honestly, they were absolutely right. For more than 30 years, JPEG has made high-quality, high-resolution photography accessible in operating systems far and wide. Although we no longer need to compress every JPEG file to within an inch of its life, having that capability helped enable the modern internet.

(The book, which tries to explain the way JPEG works both for the layperson and through in-depth mathematical equations, is on the Internet Archive for one-hour checkout, by the way, but its layout is completely messed up, sadly.)

As the book notes, Mitchell and Pennebaker were given IBM’s support to follow through on this research and work with the JPEG committee, and that support led them to develop many of the JPEG format’s foundational patents. One of the first patents filed by Mitchell and Pennebaker around image compression, filed in 1988 and granted in 1990, described an “apparatus and method for compressing and de-compressing binary decision data by arithmetic coding and decoding wherein the estimated probability Qe of the less probable of the two decision events, or outcomes, adapts as decisions are successively encoded.” Another, also tied to Pennebaker and Mitchell, described an “apparatus and method for adapting the estimated probability of either the less likely or more likely outcome (event) of a binary decision in a sequence of binary decisions involves the updating of the estimated probability in response to the renormalization of an augend A.”

That likely reads like gibberish to you, but essentially, IBM and other members of the JPEG standards committee, such as AT&T and Canon, were developing ways to use compression to make high-quality images easier to deliver in confined settings.

At 85% quality, it is down to about 336k, which means that dropping just 15% of quality saved us two-thirds of the file size.

Each brought their own needs to the process. Canon, obviously, was more focused on printers and photography, while AT&T’s interests were tied to data transmission. Together, the companies left behind a standard that has more than stood the test of time.

All this means, funnily enough, that the first place a program capable of using JPEG compression appeared was not MacOS or Windows, but OS/2, which supported the underlying technology of JPEG as early as 1990 through the OS/2 Image Support application. (The announcement went under the radar, describing the feature merely as “Image compression and decompression capability for color and gray images in addition to bilevel images,” but Pennebaker and Mitchell make clear in their book that this coding appeared in OS/2 Image Support first.)

Hearing that there was a “first application” associated with JPEG brought me down a rabbit hole. I did a long search for this application yesterday, trying to find as much info as possible about it. My process involved setting up an OS/2 VM and a modern web browser, so I could run any OS/2 applications related to this.

It was all for naught, though it did lead to an entertaining Mastodon thread. What I thought would bring me a step closer to the application instead led me to a text file describing it.

Any IBM employees with a copy of OS/2 Image Support lying around? You’re holding the starting point of modern-day computerized photography.

 
 

“The purpose of image compression is to represent images with less data in order to save storage cost or transmission time and costs. Obviously, the less data required to represent the image, the better, provided there is no penalty in obtaining a greater reduction. However, the most effective compression is achieved by approximating the original image (rather than reproducing it exactly), and the greater the compression, the more approximate (‘lossy’) the rendition is likely to be.”

— A description of the goals of the JPEG format, according to JPEG: Still Image Data Compression Standard. In many ways, the JPEG was intended to be a format that could be perfect when it needed to be, but good enough when the circumstances didn’t allow for perfection.

 
 

That same forest, saved at 65%, using a progressive load. Down to about 200k. This will load faster. However, images load so quickly these days that you may not even notice the progressive render unless you’re on a slow internet connection or a slow computer.

What a JPEG does when you heavily compress it

The thing that differentiates a JPEG file from a PNG or a GIF is the nature of its compression. The goal for a JPEG image is to still look like a photo when all is said and done, even if some compression is necessary to make it all work at a reasonable size. The idea is to make it so that you can display something that looks close to the original image in fewer bytes.

Central to this is a compression process called discrete cosine transform (DCT), a lossy form of compression encoding heavily used in all sorts of compressed formats, most notably in digital audio and signal processing. Essentially, it delivers a lower-quality product by removing extreme details, while still keeping the heart of the original product through approximation. The stronger the cosine transformation, the more compressed the final result.

The algorithm, developed by researchers Nasir Ahmed, T. Natarajan, and K. R. Rao in the 1970s, essentially takes a grid of data and treats it as if you’re controlling its frequency with a knob, like a faucet or a volume control: the more data you want, the higher the setting. Essentially, DCT allows a trickle of data to keep flowing even in highly constrained situations, even if it means a somewhat compromised result. In other words, you may not keep all the data when you compress it, but DCT allows you to keep the heart of it.

That is dumbed down significantly, because we are not a technical publication. However, if you want a more technical but still somewhat easy-to-follow description of DCT, I recommend this clip from Computerphile, featuring a description of compression from computer imaging researcher Mike Pound, who uses the wales on the jumper he’s wearing to break down how the cosine transform functions.
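If you’d rather poke at the idea yourself, here is a minimal sketch of the DCT step in Python, using SciPy. To be clear, this is not the real JPEG pipeline (there is no color transform, no quantization table, no entropy coding), and the sample block is synthetic, but it shows the core trick: transform a block of pixels into frequency coefficients, throw most of them away, and still reconstruct something close to the original.

```python
# A minimal sketch of the DCT idea, NOT the full JPEG pipeline:
# no color transform, quantization tables, or entropy coding here.
import numpy as np
from scipy.fft import dctn, idctn

# A synthetic, smoothly varying 8x8 block, standing in for a patch
# of a photograph (photos are dominated by low frequencies).
x, y = np.meshgrid(np.arange(8), np.arange(8))
block = 128 + 40 * np.cos(x / 3.0) + 10 * (y / 7.0)

# Forward 2D DCT: pixels -> frequency coefficients.
coeffs = dctn(block, norm="ortho")

# "Turn the knob down": keep only the lowest 4x4 of the 8x8
# frequencies, i.e. 16 of 64 numbers.
kept = np.zeros_like(coeffs)
kept[:4, :4] = coeffs[:4, :4]

# Inverse DCT: reconstruct an approximation from what survived.
approx = idctn(kept, norm="ortho")
print("worst per-pixel error:", np.abs(block - approx).max())
```

Real encoders are subtler than this (they divide each coefficient by a quantization table and round, which is what the quality slider actually controls), but the keep-the-low-frequencies intuition is the same.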

DCT is everywhere. If you have ever seen a streaming video or an online radio stream that degraded in quality because your bandwidth suddenly declined, you are witnessing DCT being utilized in real time.

A JPEG file doesn’t have to leverage the DCT in just one way, as JPEG: Still Image Data Compression Standard explains:

The JPEG standard describes a family of large image compression techniques, rather than a single compression technique. It provides a “tool kit” of compression techniques from which applications can select elements that satisfy their particular requirements.

The toolkit has four modes, which work in these ways:

  • Sequential DCT, which displays the compressed image in order, like a window shade slowly being rolled down
  • Progressive DCT, which displays the full image in the lowest-resolution format, then adds detail as more information rolls in
  • Sequential lossless, which uses the window-shade format but compresses the image without discarding any data
  • Hierarchical mode, which combines the prior three modes—so maybe it starts with a progressive mode, then loads DCT compression slowly, but then reaches a lossless final result

At the time the JPEG was being created, modems were extremely common, and that meant images loaded slowly, making Progressive DCT the most fitting format for the early internet. Over time, the progressive DCT mode has become less common, as many computers can simply load the sequential DCT in one fell swoop.
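Incidentally, the quality ladder running through this article’s captions is easy to reproduce. Here is a sketch using the Pillow library; the filename is a placeholder, and note that Pillow recommends keeping the quality setting at 95 or below.

```python
# Reproduce this article's quality ladder with Pillow.
# "forest.jpg" is a placeholder filename.
import os
from PIL import Image

img = Image.open("forest.jpg")

for quality in (85, 65, 30, 15, 7, 1):
    out = f"forest_q{quality}.jpg"
    # progressive=True writes a multi-scan (progressive DCT) file;
    # leave it out for a baseline sequential JPEG.
    img.save(out, "JPEG", quality=quality, progressive=True)
    print(f"{quality}%: {os.path.getsize(out) // 1024} KB")
```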

Down to 30%. About 120k. Still looks like a photo!

When an image is compressed with DCT, it tends to be less noticeable in areas of the image where there’s a lot of activity going on. Those areas are harder to compress, which means they keep their integrity longer. It tends to be more noticeable, however, with solid colors or in areas where the image sharply changes from one color to another—you know, like text on a page. (Which is why if you have a picture of text, you shouldn’t share it in a JPG format unless it is high resolution or you can live with the degradation.)

Other formats, like PNG, do better with text, because their compression is lossless. (Notably, PNG’s compression format, DEFLATE, was designed by Phil Katz, who also created the ZIP format. PNG uses it in part because it was a license-free compression format. So it turns out the brilliant coder with the sad life story improved the internet in more ways than one before his untimely passing. How is there not a dramatic movie about Phil Katz?)
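You can see the difference for yourself with a few lines of Pillow; everything here (the canvas size, the text, the quality setting, the filenames) is just for illustration:

```python
# Why text fares better in PNG than in JPEG, in miniature.
from PIL import Image, ImageDraw

img = Image.new("RGB", (400, 100), "white")
draw = ImageDraw.Draw(img)
draw.text((10, 40), "Sharp edges are hard on the DCT.", fill="black")

img.save("text.png")              # lossless DEFLATE: pixel-perfect
img.save("text.jpg", quality=30)  # lossy DCT: smudges around the glyphs
```

Zoom into the JPEG version and you’ll see the telltale “mosquito noise” ringing around each letter; the PNG stays crisp.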

In many ways, the JPEG is one tool in our image-making toolkit. Despite its age and maturity, it remains one of our best options for sharing photos on the internet. But it is not a tool for every setting—despite the fact that, like a wrench sometimes used as a hammer, we often leverage it that way.

 
 

NO

The answer to the question, “Did NCSA Mosaic initially support inline JPEG files?” It’s surprising today, given the absolute ubiquity of the JPG format, but the browser that started the visual internet did not initially support JPG files without the use of an external viewer. (It supported inline GIF files, however, along with the largely forgotten XBitMap format.) Support came in 1995, but by that point, Netscape Navigator had come out, explicitly promoting inline JPEG support as a marquee feature.

 
 

That same forest, at 15%. We are now down to 71k.

How a so-called patent troll was able to make bank off the JPEG in the early 2000s

If you’re a patent holder, the best kind of patent to hold is one that has been largely forgotten about, but is the linchpin of a common piece of technology already used by millions of people.

This is arguably what happened in 1986, when Compression Labs employees Wen-Hsiung Chen and Daniel J. Klenke filed what became U.S. patent 4,698,672, “Coding system for reducing redundancy,” which dealt with a way to improve signal processing for motion graphics so that they took up less space in distribution. That work overlapped with what the JPEG format was doing. They had created a ticking time bomb for the computer industry. Someone just needed to find it.

And find it they did. In 1997, a company named Forgent Networks acquired Compression Labs, and in 2002, Forgent claimed this patent effectively gave them partial ownership of the JPEG format in various settings, including digital cameras. They started filing patent lawsuits—and winning, big.

"The patent, in some respects, is a lottery ticket," Forgent Chief Financial Officer Jay Peterson told CNET in 2005. "If you told me five years ago that 'You have the patent for JPEG,' I wouldn't have believed it."

Now, if this situation sounds familiar to you, it’s because a better-known company, Unisys, had done the exact same thing nearly a decade prior, except with the GIF format. The company began threatening CompuServe and others at a time when the GIF was the internet’s favorite file format. Unisys apparently had no qualms about being unpopular with internet users of the era, and charged website owners $5,000 to use GIFs. Admittedly, that company had a more cut-and-dried case for doing so, as the firm directly owned the Lempel–Ziv–Welch (LZW) compression format that GIFs used; it was created by employees of its predecessor company, Sperry. (This led to the creation of the patent-free PNG format in 1995.)

We’re now at 7%—and just over 30k. We are now 1/33rd the size of the file at the top of the document. Check out the color degradation on this one.

But Forgent, despite having a far more tenuous ownership claim on the JPEG compression algorithm, was nonetheless much more successful in drawing money from patent lawsuits against JPEG users, earning more than $100 million from digital camera makers during the early 2000s before the patent finally ran out of steam around 2007. The company also attempted to convince PC makers to give it a billion dollars, before being talked down to a mere $8 million.

As Forgent tried to squeeze cash from the old patent, its claims grew increasingly controversial. Eventually, the patent was narrowed in scope to cover only motion-based uses, i.e., video. On top of that, evidence of prior art was uncovered, because patent troll critics were understandably pissed off when Forgent started suing in 2004.

(The company tried expanding its patent-trolly horizons during this period. It began threatening DVR-makers over a separate patent that described recording TV shows to a computer.)

Forgent Networks no longer exists under that name. In 2007, just as the compression patent expired, the company renamed itself to Asure Software, which specializes in payroll and HR solutions. They used their money to get out of the patent-trolling game, which I guess is somewhat noble.

 
 

200M

The estimated number of images that the Library of Congress has inventoried in JPEG 2000, a successor standard to the JPEG that was first released in 2001. This flexible update of the original JPEG format offered better compression performance but required more computational power. The original JPEG format remains far more popular, but JPEG 2000 has found success in numerous niches.

 
 

The JPEG file format has served us well, and it has been difficult to knock it from its perch. The JPEG 2000 format, for example, was intended to supplant it by offering more lossless options and better performance. However, it is less an end-user format than a specialized one.

JP2s are harder to find on the open web—one of the few places online that I see them happens to be the Internet Archive. (Which means the Internet Archive served images from that JPEG book in JP2 format.)

Our forest, saved at 1% quality. So much of the detail has been removed, yet you can still tell what it is. This image is only about 15k in size. That’s the power of the JPG.

Other image technologies have had somewhat more luck getting past the JPG format. The Google-supported WebP format has proven popular with website developers (if controversial with the folks actually saving the images), and the AVIF and HEIC formats, each developed by standards bodies, have largely outpaced both JPEG and JPEG 2000.

The JPEG will be difficult to kill at this juncture. These days, the format is similar to the MP3 file or the ZIP format—two legacy formats too popular to kill. Other formats that compress the files better and do the same things more efficiently are out there, but it’s difficult to topple a format with a 30-year head start.

Shaking off the JPG is easier said than done. I think most people will be fine to keep it around.

--

Find this one fascinating? Share it with a pal!

And if you’re looking for a tech-news roundup, TLDR is a great choice. Give ’em a look!




I am Jim Henson / The Playful Eye


Books That Belong On Paper first appeared on the web as Wink Books and was edited by Carla Sinclair. Sign up here to get the issues a week early in your inbox.


A CHILDREN’S BOOK ABOUT THE LIFE OF JIM HENSON

I am Jim Henson (Ordinary People Change the World)
by Brad Meltzer, Christopher Eliopoulos (Illustrator)
Dial Books
2017, 40 pages, 7.8 x 0.4 x 7.8 inches, Hardcover

Buy on Amazon

If you grew up at a certain time, there were people who were icons. Way past the rank of celebrity, bigger than characters, they were men and women whose beings and creations were intertwined with the very fabric of the things we loved to watch, read, and do. And if you were anything like me, one of those people was Jim Henson. From Sesame Street to the Muppets to (especially for me) Labyrinth, his creations, and those he curated and inspired, wove themselves deeply into the pop culture interests of kids all over the world. They were like the air: they just existed around us, and we felt they were part of the natural order of things.

But in the end Jim Henson was just a person, just an ordinary human who started life simply and lived his life from there. Along the way, however, he changed the world with a piece of cloth he took from his mother’s coat and a ping pong ball.

Creating Kermit the Frog is just one of the stories that you’ll find in I Am Jim Henson, a great entry in the ongoing series “Ordinary People Change the World” by Brad Meltzer and Christopher Eliopoulos (this is one of their more recent releases but the series covers important and fascinating figures like Rosa Parks, Jane Goodall, and George Washington among many others). If you’re familiar with Eliopoulos’ work then you know that you’re in for a visual treat and you won’t be let down. I’ve found his Bill Watterson-inspired art a treat for years now and this series is the perfect showcase for it. It’s cute, it’s funny, and both kids and adults will love it.

I’ve honestly read very little Meltzer as I’m not a big fan of thriller novels or the comic book work of his that I’ve dipped my toes into, but his approach here is fascinating and really resonates with the reader. Meltzer uses Henson as his narrator, even going so far as to use many actual quotes by Henson as dialogue, and as a narrator he’s not telling the reader about his life, he’s telling the reader a story about his life. The distinction is important as the book becomes a testament to storytelling, enriching perhaps the greatest accomplishment that Henson and his co-creators (many of whom are characters in the book) ever made: using impersonal and inanimate objects to create lively stories that could make a viewer laugh, cry, or think without spending a single moment thinking about the fact that a piece of cloth and a pair of hands was making it happen. Meltzer pulls the same trick in this book, turning an autobiographical book into a parable about the power of storytelling. Not a bad bit of sleight-of-hand for something intended to be read by (or to) rugrats.

The book also homes in on the concept of “goodness” that was a hallmark of Henson & Co.’s work, and Meltzer builds up to it carefully throughout the story, making the impact ring soundly. Henson believed that this goodness was the key ingredient of his work, and that comedy didn’t have to be mean to be funny. His approach was validated by the immense popularity of the work he was a part of, and while it’s not, of course, the only approach, it put Sesame Street and the Muppets in the hearts of millions and millions of people.

And that’s a story worth telling.

– Rob Trevino


THE PLAYFUL EYE IS A VIRTUAL FEAST OF GAMES AND VISUAL TRICKS GATHERED FROM AROUND THE WORLD

The Playful Eye: An Album of Visual Delight
by Julian Rothenstein, Mel Gooding
Chronicle Books
2000, 112 pages, 9.9 x 0.5 x 12.7 inches, Paperback

Buy on Amazon

These vintage cards and old placards display optical illusions, visual witticisms, hidden images, rebuses, and artistic paradoxes from yesteryear. They were the equivalent of GIFs back then — eye candy worth sharing. Here they are gathered in an oversized paperback for your entertainment and amazement.

– Kevin Kelly


The Lunacy of Artemis


distant photo of Artemis rocket on launch pad

A little over 51 years ago, a rocket lifted off from Cape Canaveral carrying three astronauts and a space car. After a three-day journey to the moon, two of the astronauts climbed into a spindly lander and made the short trip down to the surface, where for another three days they collected rocks and did donuts in the space car. Then they climbed back into the lander, rejoined their colleague in orbit, and departed for Earth. Their capsule splashed down in the South Pacific on December 19, 1972. This mission, Apollo 17, would be the last time human beings ventured beyond low Earth orbit.

If you believe NASA, late in 2026 Americans will walk on the moon again. That proposed mission is called Artemis 3, and its lunar segment looks a lot like Apollo 17 without the space car. Two astronauts will land on the moon, collect rocks, take selfies, and about a week after landing rejoin their orbiting colleagues to go back to Earth.

But where Apollo 17 launched on a single rocket and cost $3.3 billion (in 2023 dollars), the first Artemis landing involves a dozen or two heavy rocket launches and costs so much that NASA refuses to give a figure (one veteran of NASA budgeting estimates it at $7-10 billion).[1] The single-use lander for the mission will be the heaviest spacecraft ever flown, and yet the mission's scientific return—a small box of rocks—is less than what came home on Apollo 17. And the whole plan hinges on technologies that haven't been invented yet becoming reliable and practical within the next eighteen months.

You don’t have to be a rocket scientist to wonder what’s going on here. If we can put a man on the moon, then why can't we just go do it again? The moon hasn’t changed since the 1960’s, while every technology we used to get there has seen staggering advances. It took NASA eight years to go from nothing to a moon landing at the dawn of the Space Age. But today, twenty years and $93 billion after the space agency announced our return to the moon, the goal seems as far out of reach as ever.[2]

Articles about Artemis often give the program’s tangled backstory. But I want to talk about Artemis as a technical design, because there’s just so much to drink in. While NASA is no stranger to complex mission architectures, Artemis goes beyond complex to the just plain incoherent. None of the puzzle pieces seem to come from the same box. Half the program requires breakthrough technologies that make the other half unnecessary. The rocket and spacecraft NASA spent two decades building can’t even reach the moon. And for reasons no one understands, there’s a new space station in the mix.

In the past, whatever oddball project NASA came up with, we at least knew they could build the hardware. But Artemis calls the agency’s competence as an engineering organization into question. For the first time since the early 1960's, it's unclear whether the US space agency is even capable of putting astronauts on the Moon.

Photograph of SLS rocket

A Note on Apollo

In this essay I make a lot of comparisons to Project Apollo. This is not because I think other mission architectures are inferior, but because the early success of that program sets such a useful baseline. At the dawn of the Space Age, using rudimentary technology, American astronauts landed on the moon six times in seven attempts. The moon landings were NASA’s greatest achievement and should set a floor for what a modern mission, flying modern hardware, might achieve.

Advocates for Artemis insist that the program is more than Apollo 2.0. But as we’ll see, Artemis can't even measure up to Apollo 1.0. It costs more, does less, flies less frequently, and exposes crews to risks that the steely-eyed missile men of the Apollo era found unacceptable. It's as if Ford in 2024 released a new model car that was slower, more accident-prone, and ten times more expensive than the Model T.

When a next-generation lunar program can’t meet the cost, performance, or safety standards set three generations earlier, something has gone seriously awry.

Photograph of SLS rocket

I. The Rocket

The jewel of Artemis is a big orange rocket with a flavorless name, the Space Launch System (SLS). SLS looks like someone started building a Space Shuttle and ran out of legos for the orbiter. There is the familiar orange tank, a big white pair of solid rocket boosters, but then the rocket just peters out in a 1960’s style stack of cones and cylinders.

The best way to think of SLS is as a balding guy with a mullet: there are fireworks down below that are meant to distract you from a sad situation up top. In the case of the rocket, those fireworks are a first stage with more thrust than the Saturn V, enough thrust that the boosted core stage can nearly put itself into orbit. But on top of this monster sits a second stage so anemic that even its name (the Interim Cryogenic Propulsion Stage) is a kind of apology. For eight minutes SLS roars into the sky on a pillar of fire. And then, like a cork popping out of a bottle, the tiny ICPS emerges and drifts vaguely moonwards on a wisp of flame.

With this design, the minds behind SLS achieved a first in space flight, creating a rocket that is at the same time more powerful and less capable than the Saturn V. While the 1960’s giant could send 49 metric tons to the Moon, SLS only manages 27 tons—not enough to fly an Apollo-style landing, not enough to even put a crew in orbit around the Moon without a lander. The best SLS can do is slingshot the Orion spacecraft once around the moon and back, a mission that will fly under the name Artemis 2.

NASA wants to replace ICPS with an ‘Exploration Upper Stage’ (the project has been held up, among other things, by a near-billion dollar cost overrun on a launch pad). But even that upgrade won’t give SLS the power of the Saturn V. For whatever reason, NASA designed its first heavy launcher in forty years to be unable to fly the simple, proven architecture of the Apollo missions.

Of course, plenty of rockets go on to enjoy rewarding, productive careers without being as powerful as the Saturn V. And if SLS rockets were piling up at the Michoud Assembly Facility like cordwood, or if NASA were willing to let its astronauts fly commercial, it would be a simple matter to split Artemis missions across multiple launches.

But NASA insists that astronauts fly SLS. And SLS is a “one and done” rocket, artisanally hand-crafted by a workforce that likes to get home before traffic gets bad. The rocket can only launch once every two years at a cost of about four billion dollars[3]—about twice what it would cost to light the rocket’s weight in dollar bills on fire[4].

Early on, SLS designers made the catastrophic decision to reuse Shuttle hardware, which is like using Fabergé eggs to save money on an omelette. The SLS core stage recycles Space Shuttle main engines, actual veterans of old Shuttle flights called out of retirement for one last job. Refurbishing a single such engine to work on SLS costs NASA $40 million, or a bit more than SpaceX spends on all 33 engines on its Superheavy booster.[5] And though the Shuttle engines are designed to be fully reusable (the main reason they're so expensive), every SLS launch throws four of them away. Once all the junkyards are picked clean, NASA will pay Aerojet Rocketdyne to restart production of the classic engine at a cool unit cost of $145 million[6].

The story is no better with the solid rocket boosters, the other piece of Shuttle hardware SLS reuses. Originally a stopgap measure introduced to save the Shuttle budget, these heavy rockets now attach themselves like barnacles to every new NASA launcher design. To no one’s surprise, retrofitting a bunch of heavy steel casings left over from Shuttle days has saved the program nothing. Each SLS booster is now projected to cost $266 million, or about twice the launch cost of a Falcon Heavy.[7] Just replacing the asbestos lining in the boosters with a greener material, a project budgeted at $4.4M, has now cost NASA a quarter of a billion dollars. And once the leftover segments run out seven rockets from now, SLS will need a brand new booster design, opening up fertile new vistas of overspending.

Costs on SLS have reached the point where private industry is now able to develop, test, and launch an entire rocket program for less than NASA spends on a single engine[8]. Flying SLS is like owning a classic car—everything is hand built, the components cost a fortune, and when you finally get the thing out of the shop, you find yourself constantly overtaken by younger rivals.

But the cost of SLS to NASA goes beyond money. The agency has committed to an antiquated frankenrocket just as the space industry is entering a period of unprecedented innovation. While other space programs get to romp and play with technologies like reusable stages and exotic alloys, NASA is stuck for years wasting a massive, skilled workforce on a dead-end design.

The SLS program's slow pace also affects safety. Back in the Shuttle era, NASA managers argued that it took three to four launches a year to keep workers proficient enough to build and launch the vehicles safely. A boutique approach where workers hand-craft one rocket every two years means having to re-learn processes and procedures with every launch.

It also leaves no room in Artemis for test flights. The program simply assumes success, flying all its important 'firsts' with astronauts on board. When there are unanticipated failures, like the extensive heat shield spalling and near burn-through observed in Artemis 1,[9] the agency has no way to test a proposed fix without a multi-year delay to the program. So they end up using indirect means to convince themselves that a new design is safe to fly, a process ripe for error and self-delusion.

Orion space capsule with OVERSIZE LOAD banner

II. The Spacecraft

Orion, the capsule that launches on top of SLS, is a relaxed-fit reimagining of the Apollo command module suitable for today’s larger astronaut. It boasts modern computers, half again as much volume as the 1960’s design, and a few creature comforts (like not having to poop in a baggie) that would have pleased the Apollo pioneers.

The capsule’s official name is the Orion Multipurpose Crew Vehicle, but finding even a single purpose for Orion has greatly challenged NASA. For twenty years the spacecraft has mostly sat on the ground, chewing through a $1.2 billion annual budget. In 2014, the first Orion flew a brief test flight. Eight short years later, Orion launched again, carrying a crew of instrumented mannequins around the Moon on Artemis 1. In 2025 the capsule (by then old enough to drink) is supposed to fly human passengers on Artemis 2.

Orion goes to space attached to a basket of amenities called the European Service Module. The ESM provides Orion with solar panels, breathing gas, batteries, and a small rocket that is the capsule’s principal means of propulsion. But because the ESM was never designed to go to the moon, it carries very little propellant—far too little to get the hefty capsule in and out of lunar orbit.[10]

And Orion is hefty. Originally designed to hold six astronauts, the capsule was never resized when the crew requirement shrank to four. Like an empty nester’s minivan, Orion now hauls around a bunch of mass and volume that it doesn’t need. Even with all the savings that come from replacing Apollo-era avionics, the capsule weighs almost twice as much as the Apollo Command Module.

This extra mass has knock-on effects across the entire Artemis design. Since a large capsule needs a large abort rocket, SLS has to haul Orion's massive Launch Abort System—seven tons of dead weight—nearly all the way into orbit. And reinforcing the capsule so that abort system won't shake the astronauts into jelly means making it heavier, which puts more demand on the parachutes and heat shield,[11] and around and around we go.

Orion space capsule with OVERSIZE LOAD banner

Size comparison of the Apollo command and service module (left) and Orion + European Service Module (right)

What’s particularly frustrating is that Orion and ESM together have nearly the same mass as the Apollo command and service modules, which had no trouble reaching the Moon. The difference is all in the proportions. Where Apollo was built like a roadster, with a small crew compartment bolted onto an oversized engine, Orion is the Dodge Journey of spacecraft—a chunky, underpowered six-seater that advertises to the world that you're terrible at managing money.

diagram of near-rectilinear halo orbit

III. The Orbit

The fact that neither its rocket nor its spaceship can get to the Moon creates difficulties for NASA’s lunar program. So, like an aging crooner transposing old hits into an easier key, the agency has worked to find a ‘lunar-adjacent’ destination that its hardware can get to.

Their solution is a bit of celestial arcana called Near Rectilinear Halo Orbit, or NRHO. A spacecraft in this orbit circles the moon every 6.5 days, passing 1,000 kilometers above the lunar north pole at closest approach, then drifting out about 70,000 kilometers (a fifth of the Earth/Moon distance) at its furthest point. Getting to NRHO from Earth requires significantly less energy than entering a useful lunar orbit, putting it just within reach for SLS and Orion.[12]

To hear NASA tell it, NRHO is so full of advantages that it’s a wonder we stay on Earth. Spacecraft in the orbit always have a sightline to Earth and never pass through its shadow. The orbit is relatively stable, so a spacecraft can loiter there for months using only ion thrusters. And the deep space environment is the perfect place to practice going to Mars.

But NRHO is terrible for getting to the moon. The orbit is like one of those European budget airports that leaves you out in a field somewhere, requiring an expensive taxi. In Artemis, this taxi takes the form of a whole other spaceship—the lunar lander—which launches without a crew a month or two before Orion and is supposed to be waiting in NRHO when the capsule arrives.

Once these two spacecraft dock together, two astronauts climb into the lander from Orion and begin a day-long descent to the lunar surface. The other two astronauts wait for them in NRHO, playing hearts and quietly absorbing radiation.

Apollo landings also divided the crew between lander and orbiter. But those missions kept the command module in a low lunar orbit that brought it over the landing site every two hours. This proximity between orbiter and lander had enormous implications for safety. At any point in the surface mission, the astronauts on the moon could climb into the ascent rocket, hit the big red button, and be back sipping Tang with the command module pilot by bedtime. The short orbital period also gave the combined crew a dozen opportunities a day to return directly to Earth. [13]

Sitting in NRHO makes abort scenarios much harder. Depending on when in the mission it happens, a stricken lander might need three or more days to catch up with the orbiting Orion. In the worst case, the crew might find themselves stuck on the lunar surface for hours after an abort is called, forced to wait for Orion to reach a more favorable point in its orbit. And once everyone is back on Orion, more days might pass before the crew can depart for Earth. These long and variable abort times significantly increase risk to the crew, making many scenarios that were survivable on Apollo (like Apollo 13!) lethal on Artemis. [14]

The abort issue is just one example of NRHO making missions slower. NASA likes to boast that Orion can stay in space far longer than Apollo, but this is like bragging that you’re in the best shape of your life after the bank repossessed your car. It's an oddly positive spin to put on bad life choices. The reason Orion needs all that endurance is because transit times from Earth to NRHO are long, and the crew has to waste additional time in NRHO waiting for orbits to line up. The Artemis 3 mission, for example, will spend 24 days in transit, compared to just 6 days on Apollo 11.

NRHO even dictates how long astronauts stay on the Moon—surface time has to be a multiple of the 6.5 day orbital period. This lack of flexibility means that even early flag-and-footprints missions like Artemis 3 have to spend at least a week on the moon, a constraint that adds considerable risk to the initial landing. [15]

In spaceflight, brevity is safety. There's no better way to protect astronauts from the risks of solar storms, mechanical failure, and other mishaps than by minimizing slack time in space. Moreover, a safe architecture should allow for a rapid return to Earth at any point in the mission. There’s no question astronauts on the first Artemis missions would be better off with Orion in low lunar orbit. The decision to stage from NRHO is an excellent example of NASA designing its lunar program in the wrong direction—letting deficiencies in the hardware dictate the level of mission risk. 

diagram of Gateway

Early diagram of Gateway. Note that the segment marked 'human lander system' now dwarfs the space station.

IV. Gateway

I suppose at some point we have to talk about Gateway. Gateway is a small modular space station that NASA wants to build in NRHO. It has been showing up across various missions like a bad smell since before 2012.

Early in the Artemis program, NASA described Gateway as a kind of celestial truck stop, a safe place for the lander to park and for the crew to grab a cup of coffee on their way to the moon. But when it became clear that Gateway would not be ready in time for Artemis 3, NASA re-evaluated. Reasoning that two spacecraft could meet up in NRHO just as easily as three, the agency gave permission for the first moon landing to proceed without a space station.

Despite this open admission that Gateway is unnecessary, building the space station remains the core activity of the Artemis program. The three missions that follow that first landing are devoted chiefly to Gateway assembly. In fact, initial plans for Artemis 4 left out a lunar landing entirely, as if it were an inconvenience to the real work being done up in orbit.

This is a remarkable situation. It’s like if you hired someone to redo your kitchen and they started building a boat in your driveway. Sure, the boat gives the builders a place to relax, lets them practice tricky plumbing and finishing work, and is a safe place to store their tools. But all those arguments will fail to satisfy. You still want to know what building a boat has to do with kitchen repair, and why you’re the one footing the bill.

NASA has struggled to lay out a technical rationale for Gateway. The space station adds both cost and complexity to Artemis, a program not particularly lacking in either. Requiring moon-bound astronauts to stop at Gateway also makes missions riskier (by adding docking operations) while imposing a big propellant tax. Aerospace engineer and pundit Robert Zubrin has aptly called the station a tollbooth in space.

Even Gateway defenders struggle to hype up the station. A common argument is that Gateway may not be ideal for any one thing, but is good for a whole lot of things. But that is the same line of thinking that got us SLS and Orion, both vehicles designed before anyone knew what to do with them. The truth is that all-purpose designs don't exist in human space flight. The best you can do is build a spacecraft that is equally bad at everything.

But to search for technical grounds is to misunderstand the purpose of Gateway. The station is not being built to shelter astronauts in the harsh environment of space, but to protect Artemis in the harsh environment of Congress. NASA needs Gateway to navigate an uncertain political landscape in the 2030’s. Without a station, Artemis will just be a series of infrequent multibillion dollar moon landings, a red cape waved in the face of the Office of Management and Budget. Gateway armors Artemis by bringing in international partners, each of whom contributes expensive hardware. As NASA learned building the International Space Station, this combination of sunk costs and international entanglement is a powerful talisman against program death.

Gateway also solves some other problems for NASA. It gives SLS a destination to fly to, stimulates private industry (by handing out public money to supply Gateway), creates a job for the astronaut corps, and guarantees the continuity of human space flight once the ISS becomes uninhabitable sometime in the 2030’s. [16]

That last goal may sound odd if you don’t see human space flight as an end in itself. But NASA is a faith-based organization, dedicated to the principle that taxpayers should always keep an American or two in orbit. It’s a little bit as if the National Oceanic and Atmospheric Administration insisted on keeping bathyscaphes full of sailors at the bottom of the sea, irrespective of cost or merit, and kneecapped programs that might threaten the continuous human benthic presence. You can’t argue with faith.

From a bureaucrat’s perspective, Gateway is NASA’s ticket back to a golden era in the early 2000's when the Space Station and Space Shuttle formed an uncancellable whole, each program justifying the existence of the other. Recreating this dynamic with Gateway and SLS/Orion would mean predictable budgets and program stability for NASA well into the 2050’s.

But Artemis was supposed to take us back to a different golden age, the golden age of Apollo. And so there’s an unresolved tension in the program between building Gateway and doing interesting things on the moon. With Artemis missions two or more years apart, it’s inevitable that Gateway assembly will push aspirational projects like a surface habitat or pressurized rover out into the 2040’s. But those same projects are on the critical path to Mars, where NASA still insists we’re going in the late 2030’s. The situation is awkward.

So that is the story of Gateway—unloved, ineradicable, and as we’ll see, likely to become the sole legacy of the Artemis program. 

artist's rendering of the Human Landing System

V. The Lander

The lunar lander is the most technically ambitious part of Artemis. Where SLS, Orion, and Gateway are mostly a compilation of NASA's greatest hits, the lander requires breakthrough technologies with the potential to revolutionize space travel.

Of course, you can’t just call it a lander. In Artemis speak, this spacecraft is the Human Landing System, or HLS. NASA has delegated its design to two private companies, Blue Origin and SpaceX. SpaceX is responsible for landing astronauts on Artemis 3 and 4, while Blue Origin is on the hook for Artemis 5 (notionally scheduled for 2030). After that, the agency will take competitive bids for subsequent missions.

The SpaceX HLS design is based on their experimental Starship spacecraft, an enormous rocket that takes off and lands on its tail, like 1950’s sci-fi. There is a strong “emperor’s new clothes” vibe to this design. On the one hand, it is the brainchild of brilliant SpaceX engineers and passed NASA technical review. On the other hand, the lander seems to go out of its way to create problems for itself to solve with technology.

artist's rendering of the Human Landing System

An early SpaceX rendering of the Human Landing System, with the Apollo Lunar Module added for scale.

To start with the obvious, HLS looks more likely to tip over than the last two spacecraft to land on the moon, which tipped over. It is a fifteen-story tower that must land on its ass in terrible lighting conditions, on rubble of unknown composition, over a light-second from Earth. The crew are left suspended so high above the surface that they need a folding space elevator (not the cool kind) to get down. And yet in the end this single-use lander carries less payload (both up and down) than the tiny Lunar Module on Apollo 17. Using Starship to land two astronauts on the moon is like delivering a pizza with an aircraft carrier.

Amusingly, the sheer size of the SpaceX design leaves it with little room for cargo. The spacecraft arrives on the Moon laden with something like 200 tons of cryogenic propellant,[14] and like a fat man leaving an armchair, it needs every drop of that energy to get its bulk back off the surface. Nor does it help matters that all this cryogenic propellant has to cook for a week in direct sunlight.

Other, less daring lander designs reduce their appetite for propellant by using a detachable landing stage. This arrangement also shields the ascent rocket from hypervelocity debris that gets kicked up during landing. But HLS is a one-piece rocket; the same engines that get sandblasted on their way down to the moon must relight without fail a week later.

Given this fact, it’s remarkable that NASA’s contract with SpaceX doesn’t require them to demonstrate a lunar takeoff. All SpaceX has to do to satisfy NASA requirements is land an HLS prototype on the Moon. Questions about ascent can then presumably wait until the actual mission, when we all find out together with the crew whether HLS can take off again.[15]

This fearlessness in design is part of a pattern with Starship HLS. Problems that other landers avoid in the design phase are solved with engineering. And it’s kind of understandable why SpaceX does it this way. Starship is meant to fly to Mars, a much bigger challenge than landing two people on the Moon. If the basic Starship design can’t handle a lunar landing, it would throw the company’s whole Mars plan into question. SpaceX is committed to making Starship work, which is different from making the best possible lunar lander.

Less obvious is why NASA tolerates all this complexity in the most hazardous phase of its first moon mission. Why land a rocket the size of a building packed with moving parts? It’s hard to look at the HLS design and not think back to other times when a room full of smart NASA people talked themselves into taking major risks because the alternative was not getting to fly at all.

It’s instructive to compare the HLS approach to the design philosophy on Apollo. Engineers on that program were motivated by terror; no one wanted to make the mistake that would leave astronauts stranded on the moon. The weapon they used to knock down risk was simplicity. The Lunar Module was a small metal box with a wide stance, built low enough that the astronauts only needed to climb down a short ladder. The bottom half of the LM was a descent stage that completely covered the ascent rocket (a design that showed its value on Apollo 15, when one of the descent engines got smushed by a rock). And that ascent rocket, the most important piece of hardware in the lander, was a caveman design intentionally made so primitive that it would struggle to find ways to fail.

On Artemis, it's the other way around: the more hazardous the mission phase, the more complex the hardware. It's hard to look at all this lunar machinery and feel reassured, especially when NASA's own Aerospace Safety Advisory Panel estimates that the Orion/SLS portion of a moon mission alone (not including anything to do with HLS) already has a 1:75 chance of killing the crew.

artist's rendering of the Human Landing System

VI. Refueling

Since NASA’s biggest rocket struggles to get Orion into distant lunar orbit, and HLS weighs fifty times as much as Orion, the curious reader might wonder how the unmanned lander is supposed to get up there.

NASA’s answer is, very sensibly, “not our problem”. They are paying Blue Origin and SpaceX the big bucks to figure this out on their own. And as a practical matter, the only way to put such a massive spacecraft into NRHO is to first refuel it in low Earth orbit.

Like a lot of space technology, orbital refueling sounds simple, has never been attempted, and can’t be adequately simulated on Earth.[18] The crux of the problem is that liquid and gas phases in microgravity jumble up into a three-dimensional mess, so that even measuring the quantity of propellant in a tank becomes difficult. To make matters harder, Starship uses cryogenic propellants that boil at temperatures about a hundred degrees colder than the plumbing they need to move through. Imagine trying to pour water from a thermos into a red-hot skillet while falling off a cliff and you get some idea of the difficulties.

To get refueling working, SpaceX will first have to demonstrate propellant transfer between rockets as a proof of concept, and then get the process working reliably and efficiently at a scale of hundreds of tons. (These are two distinct challenges). Once they can routinely move liquid oxygen and methane from Starship A to Starship B, they’ll be ready to set up the infrastructure they need to launch HLS.

artist's rendering of the Human Landing System

The plan for getting HLS to the moon looks like this: a few months before the landing date, SpaceX will launch a special variant of their Starship rocket configured to serve as a propellant depot. Then they'll start launching Starships one by one to fill it up. Each Starship arrives in low Earth orbit with some residual propellant; it will need to dock with the depot rocket and transfer over this remnant fuel. Once the depot is full, SpaceX will launch HLS, have it fill its tanks at the depot rocket, and send it up to NRHO in advance of Orion. When Orion arrives, HLS will hopefully have enough propellant left on board to take on astronauts and make a single round trip from NRHO to the lunar surface.

Getting this plan to work requires solving a second engineering problem: how to keep cryogenic propellants cold in space. Low Earth orbit is a toasty place, and without special measures, the cryogenic propellants Starship uses will quickly vent off into space. The problem is easy to solve in deep space (use a sunshade), but becomes tricky in low Earth orbit, where a warm rock covers a third of the sky. (Boil-off is also a big issue for HLS on the moon.)

It’s not clear how many Starship launches it will take to refuel HLS. Elon Musk has said four launches might be enough; NASA Assistant Deputy Associate Administrator Lakiesha Hawkins says the number is in the “high teens”. Last week, SpaceX's Kathy Lueders gave a figure of fifteen launches.

The real number is unknown and will come down to four factors:

  1. How much propellant a Starship can carry to low Earth orbit.
  2. What fraction of that can be usably pumped out of the rocket.
  3. How quickly cryogenic propellant boils away from the orbiting depot.
  4. How rapidly SpaceX can launch Starships.

SpaceX probably knows the answer to (1), but isn’t talking. Data for (2) and (3) will have to wait for flight tests that are planned for 2025. And obviously a lot is riding on (4), also called launch cadence.
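To see how those four unknowns interact, here is a back-of-envelope sketch in Python. Every number in it is an assumption invented for illustration; none of these figures come from NASA or SpaceX.

```python
# Back-of-envelope: tanker flights needed to fill HLS in low Earth orbit.
# EVERY figure below is an illustrative assumption, not real data.
depot_fill_target_t   = 1000  # propellant HLS needs on departure, tons
usable_per_tanker_t   = 100   # (1) propellant a tanker brings to orbit
transfer_efficiency   = 0.9   # (2) fraction actually pumped to the depot
boiloff_t_per_day     = 1.0   # (3) depot boil-off, tons per day
days_between_launches = 6     # (4) launch cadence

delivered = usable_per_tanker_t * transfer_efficiency
lost      = boiloff_t_per_day * days_between_launches
net_gain_per_flight = delivered - lost

flights = depot_fill_target_t / net_gain_per_flight
print(f"roughly {flights:.0f} tanker flights")  # ~12 with these numbers
```

Small changes to any of the four inputs move the answer a lot, which helps explain why the public estimates range from four launches to the high teens.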

The record for heavy rocket launch cadence belongs to Saturn V, which launched three times during a four month period in 1968. Second place belongs to the Space Shuttle, which flew nine times in the calendar year before the Challenger disaster. In third place is Falcon Heavy, which flew six times in a 13 month period beginning in November 2022.

For the refueling plan to work, Starship will have to break this record by a factor of ten, launching every six days or so across multiple launch facilities. [1] The refueling program can tolerate a few launch failures, as long as none of them damages a launch pad.

There’s no company better prepared to meet this challenge than SpaceX. Their Falcon 9 rocket has shattered records for both reliability and cadence, and now launches about once every three days. But it took SpaceX ten years to get from the first orbital Falcon 9 flight to a weekly cadence, and Starship is vastly bigger and more complicated than the Falcon 9. [20]

Working backwards from the official schedule allows us to appreciate the time pressure facing SpaceX. To make the official Artemis landing date, SpaceX has to land an unmanned HLS prototype on the moon in early 2026. That means tanker flights to fill an orbiting depot would start in late 2025. This doesn’t leave a lot of time for the company to invent orbital refueling, get it working at scale, make it efficient, deal with boil-off, get Starship launching reliably, begin recovering booster stages,[21] set up additional launch facilities, achieve a weekly cadence, and at the same time design and test all the other systems that need to go into HLS.

Lest anyone think I’m picking on SpaceX, the development schedule for Blue Origin’s 2029 lander is even more fantastical. That design requires pumping tons of liquid hydrogen between spacecraft in lunar orbit, a challenge perhaps an order of magnitude harder than what SpaceX is attempting. Liquid hydrogen is bulky, boils near absolute zero, and is infamous for its ability to leak through anything (the Shuttle program couldn't get a handle on hydrogen leaks on Earth even after a hundred some launches). And the rocket Blue Origin needs to test all this technology has never left the ground.

The upshot is that NASA has put a pair of last-minute long-shot technology development programs between itself and the moon. Particularly striking is the contrast between the ambition of the HLS designs and the extreme conservatism and glacial pace of SLS/Orion. The same organization that spent 23 years and 20 billion dollars building the world's most vanilla spacecraft demands that SpaceX darken the sky with Starships within four years of signing the initial HLS contract. While thrilling for SpaceX fans, this is pretty unserious behavior from the nation’s space agency, which had several decades' warning that going to the moon would require a lander.

All this to say, it's universally understood that there won’t be a moon landing in 2026. At some point NASA will have to officially slip the schedule, as it did in 2021, 2023, and at the start of this year. If this accelerating pattern of delays continues, by year’s end we might reach a state of continuous postponement, a kind of scheduling singularity where the landing date for Artemis 3 recedes smoothly and continuously into the future.

Otherwise, it's hard to imagine a manned lunar landing before 2030, if the Artemis program survives that long.

Interior of Skylab

VII. Conclusion

I want to stress that there’s nothing wrong with NASA making big bets on technology. Quite the contrary, the audacious HLS contracts may be the healthiest thing about Artemis. Visionaries at NASA identified a futuristic new energy source (space billionaire egos) and found a way to tap it on a fixed-cost basis. If SpaceX or Blue Origin figure out how to make cryogenic refueling practical, it will mean a big step forward for space exploration, exactly the thing NASA should be encouraging. And if the technology doesn’t pan out, we’ll have found that out mostly by spending Musk’s and Bezos’s money.

The real problem with Artemis is that it doesn’t think through the consequences of its own success. A working infrastructure for orbital refueling would make SLS and Orion superfluous. Instead of waiting two years to go up on a $4 billion rocket, crews and cargo could launch every weekend on cheap commercial rockets, refueling in low Earth orbit on their way to the Moon. A similar logic holds for Gateway. Why assemble a space station out of habitrail pieces out in lunar orbit, like an animal, when you can build one on Earth and launch it in one piece? Better yet, just spraypaint “GATEWAY” on the side of the nearest Starship, send it out to NRHO, and save NASA and its international partners billions. Having a working gas station in low Earth orbit fundamentally changes what is possible, in a way the SLS/Orion arm of Artemis doesn't seem to recognize.

Conversely, if SpaceX and Blue Origin can’t make cryogenic refueling work, then NASA has no plan B for landing on the moon. All the Artemis program will be able to do is assemble Gateway. Promising taxpayers the moon only to deliver ISS Jr. does not broadcast a message of national greatness, and is unlikely to get Congress excited about going to Mars. The hurtful comparisons between American dynamism in the 1960’s and whatever it is we have now will practically write themselves.

What NASA is doing is like an office worker blowing half their salary on lottery tickets while putting the other half in a pension fund. If the lottery money comes through, then there was really no need for the pension fund. But without the lottery win, there’s not enough money in the pension account to retire on. The two strategies don't make sense together.

There’s a ‘realist’ school of space flight that concedes all this but asks us to look at the bigger picture. We’re never going to have the perfect space program, the argument goes, but the important thing is forward progress. And Artemis is the first program in years to survive a presidential transition and have a shot at getting us beyond low Earth orbit. With Artemis still funded, and Starship making rapid progress, at some point we’ll finally see American astronauts back on the moon.

But this argument has two flaws. The first is that it feeds a cycle of dysfunction at NASA that is rapidly making it impossible for us to go anywhere. Holding human space flight to a different standard than NASA’s science missions has been a disaster for space exploration. Right now the Exploration Systems Development Mission Directorate (the entity responsible for manned space flight) couldn’t build a toaster for less than a billion dollars. Incompetence, self-dealing, and mismanagement that end careers on the science side of NASA are not just tolerated but rewarded on the human space flight side. Before we let the agency build out its third white elephant project in forty years, it’s worth reflecting on what we're getting in return for half our exploration budget.

The second, more serious flaw in the “realist” approach is that it enables a culture of institutional mendacity that must ultimately be fatal at an engineering organization. We've reached a point where NASA lies constantly, to both itself and to the public. It lies about schedules and capabilities. It lies about the costs and the benefits of its human spaceflight program. And above all, it lies about risk. All the institutional pathologies identified in the Rogers Report and the Columbia Accident Investigation Board are alive and well in Artemis—groupthink, management bloat, intense pressure to meet impossible deadlines, and a willingness to manufacture engineering rationales to justify flying unsafe hardware.

Do we really have to wait for another tragedy, and another beautifully produced Presidential Commission report, to see that Artemis is broken?

Notes

[1] Without NASA's help, it's hard to put a dollar figure on a mission without making somewhat arbitrary decisions about what to include and exclude. The $7-10 billion estimate comes from a Bush-era official in the Office of Management and Budget commenting on the NASA Spaceflight Forum.

And that $7.2B assumes Artemis III stays on schedule. Based on the FY24 budget request, each additional year between Artemis II and Artemis III adds another $3.5B to $4.0B in Common Exploration costs to Artemis III. If Artemis III goes off in 2027, then it will be $10.8B total. If 2028, then $14.3B.

In other words, it's hard to break out an actual cost while the launch dates for both Artemis II and III keep slipping.

NASA's own Inspector General estimates the cost of just the SLS/Orion portion of a moon landing at $4.1 billion.

[2] The first US suborbital flight, Freedom 7, launched on May 5, 1961. Armstrong and Aldrin landed on the moon eight years and two months later, on July 20, 1969. President Bush announced the goal of returning to the Moon in a January 2004 speech, setting the target date for the first landing "as early as 2015", and no later than 2020.

[3] NASA refuses to track the per-launch cost of SLS, so it's easy to get into nerdfights. Since the main cost driver on SLS is the gigantic workforce employed on the project, something like two or three times the headcount of SpaceX, the cost per launch depends a lot on cadence. If you assume a yearly launch rate (the official line), then the rocket costs $2.1 billion a launch. If like me you think one launch every two years is optimistic, the cost climbs up into the $4-5 billion range.
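
The footnote's cadence argument reduces to a single division. A minimal sketch, under the footnote's own assumption that SLS spending is a roughly fixed annual workforce cost spread across however many launches happen:

    # Assumed fixed annual program cost in billions of dollars; the cost per
    # launch is just that spend divided by the yearly launch rate.
    annual_cost_b = 2.1

    for launches_per_year in (1.0, 0.5):   # yearly vs. once every two years
        print(f"{launches_per_year} launches/yr -> "
              f"${annual_cost_b / launches_per_year:.1f}B per launch")

Halving the cadence doubles the per-launch figure to $4.2B, which is where the footnote's $4-5 billion range comes from.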

[4] The SLS weighs 2,600 metric tons fully fueled, and conveniently enough a dollar bill weighs about 1 gram.

[5] SpaceX does not disclose the cost, but it's widely assumed the Raptor engine used on Superheavy costs $1 million.

[6] The $145 million figure comes from dividing the contract cost by the number of engines, caveman style. Others have reached a figure of $100 million for the unit cost of these engines. The important point is not who is right but the fact that NASA is paying vastly more than anyone else for engines of this class.

[7] $250M is the figure you get by dividing the $3.2 billion Booster Production and Operations contract to Northrop Grumman by the number of boosters (12) in the contract. Source: Office of the Inspector General. For cost overruns replacing asbestos, see the OIG report on NASA’s Management of the Space Launch System Booster and Engine Contracts. The Department of Defense paid $130 million for a Falcon Heavy launch in 2023.

[8] Rocket Lab developed, tested, and flew its Electron rocket for a total program cost of $100 million.

[9] In particular, the separation bolts embedded in the Orion heat shield were built based on a flawed thermal model, and need to be redesigned to safely fly a crew. From the OIG report:

Separation bolt melt beyond the thermal barrier during reentry can expose the vehicle to hot gas ingestion behind the heat shield, exceeding Orion’s structural limits and resulting in the breakup of the vehicle and loss of crew. Post-flight inspections determined there was a discrepancy in the thermal model used to predict the bolts’ performance pre-flight. Current predictions using the correct information suggest the bolt melt exceeds the design capability of Orion.

The current plan is to work around these problems on Artemis 2, and then redesign the components for Artemis 3. That means astronauts have to fly at least twice with an untested heat shield design.

[10] Orion/ESM has a delta V budget of 1340 m/s. Getting into and out of an equatorial low lunar orbit takes about 1800 m/s, more for a polar orbit. (See source.)

[11] It takes about 900 m/s of total delta V to get in and out of NRHO, comfortably within Orion/ESM's 1340 m/s budget. (See source.)

[12] In Carrying the Fire, Apollo 11 astronaut Michael Collins recalls carrying a small notebook covering 18 lunar rendezvous scenarios he might be called on to fly in various contingencies. If the Lunar Module could get itself off the surface, there was probably a way to dock with it.

For those too young to remember, Tang is a powdered orange drink closely associated with the American space program.

[13] For a detailed (if somewhat cryptic) discussion of possible Artemis abort modes to NRHO, see HLS NRHO to Lunar Surface and Back Mission Design, NASA 2022.

[14] This is my own speculative guess; the answer is very sensitive to the dry weight of HLS and the boil-off rate of its cryogenic propellants. Delta V from the lunar surface to NRHO is 2,610 m/s. Assuming HLS weighs 120 tons unfueled, it would need about 150 metric tons of propellant to get into NRHO from the lunar surface. Adding safety margin, fuel for docking operations, and allowing for a week of boiloff gets me to about 200 tons.
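
The arithmetic behind that estimate is the Tsiolkovsky rocket equation, m_prop = m_dry × (e^(Δv/ve) − 1). Here is a quick check in Python; the specific impulse values are my own assumptions for a Raptor-class methalox engine, since the note doesn't state one:

    import math

    G0 = 9.81          # m/s^2, standard gravity
    DV = 2610          # m/s, lunar surface to NRHO (from the note)
    DRY_MASS_T = 120   # tons, assumed HLS dry mass (from the note)

    for isp_s in (330, 350, 380):              # assumed engine performance range
        ve = isp_s * G0                        # effective exhaust velocity, m/s
        prop_t = DRY_MASS_T * (math.exp(DV / ve) - 1)
        print(f"Isp {isp_s} s -> {prop_t:.0f} t of propellant")

The low end of that performance range reproduces the note's roughly 150 tons for the bare ascent, before the margins and boil-off allowance that push the total toward 200.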

[15] The main safety issue is the difficult thermal environment at the landing site, where the Sun sits just above the horizon, heating half the lander. If it weren't for the NRHO constraint, it's very unlikely Artemis 3 would spend more than a day or two on the lunar surface.

[16] The ISS program has been repeatedly extended, but the station is coming up against physical limiting factors (like metal fatigue) that will soon make it too dangerous to use.

[17] Recent comments by NASA suggest SpaceX has voluntarily added an ascent phase to its landing demo, ending a pretty untenable situation. However, there's still no requirement that the unmanned landing/ascent demo be performed using the same lander design that will fly on the actual mission, another oddity in the HLS contract.

[18] To be precise, I'm talking about moving bulk propellant between rockets in orbit. There are resupply flights to the International Space Station that deliver about 850 kilograms of non-cryogenic propellant to boost the station in its orbit, and there have been small-scale experiments in refueling satellites. But no one has attempted refueling a flown rocket stage in space, cryogenic or otherwise.

[19] Both SpaceX's Kathy Lueders and NASA confirm Starship needs to launch from multiple sites. Here's an excerpt from the minutes of the NASA Advisory Council Human Exploration and Operations Committee meeting on November 17 and 20, 2023:

Mr. [Wayne] Hale asked where Artemis III will launch from. [Assistant Deputy AA for Moon to Mars Lakiesha] Hawkins said that launch pads will be used in Florida and potentially Texas. The missions will need quite a number of tankers; in order to meet the schedule, there will need to be a rapid succession of launches of fuel, requiring more than one site for launches on a 6-day rotation schedule, and multiples of launches.

[20] Falcon 9 first flew in June of 2010 and achieved a weekly launch cadence over a span of six launches starting in November 2020.

[21] Recovering Superheavy stages is not a NASA requirement for HLS, but it's a huge cost driver for SpaceX given the number of launches involved.

digdoug (31 days ago): Jesus this is brutal.

WorldMaker (31 days ago): It’s fascinating. SLS versus SpaceX versus Blue Origin. SLS unlikely to succeed. If SpaceX or Blue Origin actually succeed at their crazy goals then SLS is entirely unnecessary. If NASA does everything with SLS it will be a technical miracle. If SpaceX or Blue Origin succeed it will be less of a miracle, but still equally surprising at the timeline given. It’s a huge win for NASA though if Billionaires pay for the actual hard stuff. SpaceX and Blue Origin and some of NASA are doing everything with the idea of the Moon as a “gas station” on the way to Mars, which is more exciting than the Moon anyway. SLS barely gets to ISS, much less the Moon and so badly near sighted at reliving the shuttle glory days instead of moving the program forward. I don’t know who to root for, other than for NASA itself, and maybe against the SLS, as much as I appreciate pork barrels.

The Great Flattening

1 Comment and 2 Shares

Apple did what needed to be done to get that unfortunate iPad ad out of the news; you know, the one that somehow found the crushing of musical instruments and bottles of paint to be inspirational.

The ad was released as a part of the company’s iPad event, and was originally scheduled to run on TV; Tor Myhren, Apple’s vice-president of marketing communications, told AdAge:

Creativity is in our DNA at Apple, and it’s incredibly important to us to design products that empower creatives all over the world…Our goal is to always celebrate the myriad of ways users express themselves and bring their ideas to life through iPad. We missed the mark with this video, and we’re sorry.

The apology comes across as heartfelt — accentuated by the fact that an Apple executive put his name to it — but I disagree with Myhren: the reason why people reacted so strongly to the ad is that it couldn’t have hit the mark more squarely.

Aggregation Theory

The Internet, birthed as it was in the idealism of California tech in the latter parts of the 20th century, was expected to be a force for decentralization; one of the central conceits of this blog has been to explain why reality has been so different. From 2015’s Aggregation Theory:

The fundamental disruption of the Internet has been to turn this dynamic on its head. First, the Internet has made distribution (of digital goods) free, neutralizing the advantage that pre-Internet distributors leveraged to integrate with suppliers. Secondly, the Internet has made transaction costs zero, making it viable for a distributor to integrate forward with end users/consumers at scale.

[Diagram: Aggregation Theory]

This has fundamentally changed the plane of competition: no longer do distributors compete based upon exclusive supplier relationships, with consumers/users an afterthought. Instead, suppliers can be commoditized leaving consumers/users as a first order priority. By extension, this means that the most important factor determining success is the user experience: the best distributors/aggregators/market-makers win by providing the best experience, which earns them the most consumers/users, which attracts the most suppliers, which enhances the user experience in a virtuous cycle.

In short, the analog world was defined by scarcity, which meant distribution of scarce goods was the locus of power; the digital world is defined by abundance, which means discovery of what you actually want to see is the locus of power. The result is that consumers have access to anything, which is to say that nothing is special; everything has been flattened.

  • Google broke down every publication in the world into individual pages; search results didn’t deliver you to the front page of a newspaper or magazine, but rather dropped you onto individual articles.
  • Facebook promoted user-generated content to the same level of the hierarchy as articles from professional publications; your feed might have a picture of your niece followed by a link to a deeply-reported investigative report followed by a meme.
  • Amazon created the “Everything Store” with practically every item on Earth and the capability to deliver it to your doorstep; instead of running errands you could simply check out.
  • Netflix transformed “What’s on?” to “What do you want to watch?”. Everything from high-brow movies to budget flicks to prestige TV to reality TV was on equal footing, ready to be streamed whenever and wherever you wanted.
  • Sites like Expedia and Booking changed travel from an adventure mediated by a travel agent or long-standing brands to search results organized by price and amenities.

Moreover, this was only v1; it turns out that the flattening can go even further:

  • LLMs are breaking down all written text ever into massive models that don’t even bother with pages: they simply give you the answer.
  • TikTok disabused Meta of the notion that your relationships were a useful constraint on the content you wanted to see; now all short-form video apps surface content from across the entire network based on their understanding of what you individually are interested in.
  • Amazon is transforming into a logistics powerhouse befitting the fact that Amazon.com is increasingly dominated by 3rd-party merchant sales, and extending that capability throughout the economy.
  • All of Hollywood, convinced that content was what mattered, jointly killed the linear TV model to ensure that all professionally-produced content was available on-demand, even as YouTube became the biggest streamer of all with user-generated content that is delivered through the exact same distribution channel (apps on a smart device) as the biggest blockbusters.
  • Services like Uber and Airbnb commoditized transportation and lodging to the individual driver or homeowner.

Apple is absent from this list, although the App Store has had an Aggregator effect on developers; the reason the company belongs, though, and why they were the only company that could make an ad that so perfectly captures this great flattening, is that they created the device on which all of these services operate. The prerequisite to the commoditization of everything is access to anything, thanks to the smartphone. “There’s an app for that” indeed.

This is what I mean when I say that Apple’s iPad ad hit the mark: the reason why I think the ad resonated so deeply is that it captured something deep in the gestalt that actually has very little to do with trumpets or guitars or bottles of paint; rather, thanks to the Internet — particularly the smartphone-denominated Internet — everything is an app.

The Bicycle for the Mind

The more tangible way to see how that iPad ad hit the mark is to play it in reverse.

This is without question the message that Apple was going for: this one device, thin as can be, contains musical instruments, an artist’s studio, an arcade machine, and more. It brings relationships without borders to life, complete with cute emoji. And that’s not wrong!

Indeed, it harkens back to one of Steve Jobs’ last keynotes, when he introduced the iPad 2. My favorite moment in that keynote — one of my favorite Steve Jobs keynote moments ever, in fact — was the introduction of GarageBand. You can watch the entire introduction and demo, but the part that stands out in my memory is Jobs — clearly sick, in retrospect — visibly moved by what the company had just produced:

I’m blown away with this stuff. Playing your own instruments, or using the smart instruments, anyone can make music now, in something that’s this thick and weighs 1.3 pounds. It’s unbelievable. GarageBand for iPad. Great set of features — again, this is no toy. This is something you can really use for real work. This is something that, I cannot tell you, how many hours teenagers are going to spend making music with this, and teaching themselves about music with this.

Jobs wasn’t wrong: global hits have originated on GarageBand, along with undoubtedly many more hours of (mostly terrible, if my personal experience is any indication) amateur experimentation. Why I think this demo was so personally meaningful for Jobs, though, is that not only was GarageBand about music, one of his deepest passions, but it was also a manifestation of his life’s work: creating a bicycle for the mind.

I remember reading an article when I was about 12 years old, I think it might have been in Scientific American, where they measured the efficiency of locomotion for all these species on planet Earth. How many kilocalories did they expend to get from point A to point B, and the condor won: it came in at the top of the list, surpassed everything else. And humans came in about a third of the way down the list, which was not such a great showing for the crown of creation.

But somebody there had the imagination to test the efficiency of a human riding a bicycle. Human riding a bicycle blew away the condor, all the way off the top of the list. And it made a really big impression on me that we humans are tool builders, and that we can fashion tools that amplify these inherent abilities that we have to spectacular magnitudes, and so for me a computer has always been a bicycle of the mind, something that takes us far beyond our inherent abilities.

I think we’re just at the early stages of this tool, very early stages, and we’ve come only a very short distance, and it’s still in its formation, but already we’ve seen enormous changes, but I think that’s nothing compared to what’s coming in the next 100 years.

In Jobs’ view of the world, teenagers the world over are potential musicians, who might not be able to afford a piano or guitar or trumpet; if, though, they can get an iPad — now even thinner and lighter! — they can have access to everything they need. In this view “There’s an app for that” is profoundly empowering.

After the Flattening

The duality of Apple’s ad speaks to the reality of technology: its impact is structural, and amoral. When I first started Stratechery I wrote a piece called Friction:

If there is a single phrase that describes the effect of the Internet, it is the elimination of friction. With the loss of friction, there is necessarily the loss of everything built on friction, including value, privacy, and livelihoods. And that’s only three examples! The Internet is pulling out the foundations of nearly every institution and social more that our society is built upon.

Count me with those who believe the Internet is on par with the industrial revolution, the full impact of which stretched over centuries. And it wasn’t all good. Like today, the industrial revolution included a period of time that saw many lose their jobs and a massive surge in inequality. It also lifted millions of others out of sustenance farming. Then again, it also propagated slavery, particularly in North America. The industrial revolution led to new monetary systems, and it created robber barons. Modern democracies sprouted from the industrial revolution, and so did fascism and communism. The quality of life of millions and millions was unimaginably improved, and millions and millions died in two unimaginably terrible wars.

Change is guaranteed, but the type of change is not; never is that more true than today. See, friction makes everything harder, both the good we can do, but also the unimaginably terrible. In our zeal to reduce friction and our eagerness to celebrate the good, we ought not lose sight of the potential bad.

Today that exhortation might run in the opposite direction: in our angst about the removal of specialness and our eagerness to criticize the bad, we ought not lose sight of the potential good.

Start with this site that you are reading: yes, the Internet commoditized content that was previously granted value by virtue of being bundled with a light manufacturing business (i.e. printing presses and delivery trucks), but it also created the opportunity for entirely new kinds of content predicated on reaching niche audiences that are only sustainable when the entire world is your market.

The same principle applies to every other form of content, from music to video to books to art; the extent to which being “special” meant being scarce is the extent to which the existence of “special” meant a constriction of opportunity. Moreover, that opportunity is not a function of privilege but rather consumer demand: the old powers may decry that their content is competing with everyone on the Internet, but they are only losing to the extent that consumers actually prefer to read or watch or listen to something else. Is this supposed to be a bad thing?

Moreover, this is just as much a feather in Apple’s cap as the commoditization of everything is a black mark: Apple creates devices — tools — that let everyone be a creator. Indeed, that is why the ad works in both directions: the flattening of everything means there has been a loss; the flattening of everything also means there is entirely new opportunity.

The AI Choice

One thing I do credit Apple for is not trying to erase the ad from the Internet — it’s still posted on CEO Tim Cook’s X account — because I think it’s important not just as a marker of what has happened over the last several years, but also the choices facing us in the years ahead.

The last time I referenced Steve Jobs’ “Bicycle of the Mind” analogy was in 2018’s Tech’s Two Philosophies, where I contrasted Google and Facebook on one side, and Microsoft and Apple on the other: the former wanted to create products that did things for you; the latter products that let you do more things. This was a simplified characterization, to be sure, but, as I noted in that Article, it was also related to their traditional positions as Aggregators and platforms, respectively.

What is increasingly clear, though, is that Jobs’ prediction that future changes would be even more profound raises questions about the “bicycle for the mind” analogy itself: specifically, will AI be a bicycle that we control, or an unstoppable train to destinations unknown? To put it in the same terms as the ad, will human will and initiative be flattened, or expanded?

The route to the former seems clear, and maybe even the default: this is a world where a small number of entities “own” AI, and we use it — or are used by it — on their terms. This is the outcome being pushed by those obsessed with “safety”, and demanding regulation and reporting; that those advocates also seem to have a stake in today’s leading models seems strangely ignored.

The alternative — MKBHDs For Everything — means openness and commoditization. Yes, those words have downsides: they mean that the powers that be are not special, and sometimes that is something we lament, as I noted at the beginning of this Article. Our alternative, though, is not the gatekept world of the 20th century — we can’t go backwards — but one where the flattening is not the elimination of vitality but the tilling of the ground so that something — many things — new can be created.

digdoug (37 days ago): This really hits so many nails on the head.

An aged creation: Unveiling the LEGO whiskey distillery

1 Share

Take a look at this intriguing LEGO set designed for ages 21 and above. Crafted by builder Versteinert and titled ‘Whiskey Distillery,’ it showcases a plethora of imaginative uses for both common and uncommon pieces, resulting in a creation seemingly tailored for adult enthusiasts. This model serves as the builder’s entry for the third round of the 2024 RogueOlympics, a contest that tasks participants with creating designs using no more than 101 LEGO elements. The theme for this round was ‘Volume,’ and I find the approach to such a simple word quite refreshing. Upon closer inspection of the build, one can spot a couple of inside-out tires, a selection of Harry Potter wands, a gray cattle horn, and even a magic lamp unique to a certain Disney villain, among other elements.

Whiskey Distillery

The post An aged creation: Unveiling the LEGO whiskey distillery appeared first on The Brothers Brick.


Total Eclipse Of The Mind | NOEMA

1 Share

Laurence Pevsner is an inaugural Moynihan Public Scholar at the City College of New York. From 2021 to 2023, he was the director of speechwriting for the U.S. ambassador to the United Nations.

By the time you read this, I will be on the west coast of Mexico, hoping to see the sun vanish. The forecast right now is for clouds, which isn’t the kind of disappearance act I’m after. But if we’re lucky enough to get a clear day on April 8, at exactly 9:51:23, I’ll look up as the moon appears to collide with the sun. I’ll spend an hour watching the moon spread its shadow through solar glasses, finally removing them at 11:07:25 when the moment I’ve been waiting for arrives. And then, with naked eyes, despite having witnessed two total solar eclipses before, I don’t know what I’ll see.

I’ve been entranced by solar eclipses ever since I was 14 and read Isaac Asimov’s “Nightfall and Other Stories” on my summer camp cot. The titular story is about a planet that has six suns; at least one is in the sky for all people, all the time. Their scientists theorize there might be a couple of other stars beyond their solar system, but that their own world is the focal point. The planet’s constant daylight obscures the truth — that they are in the midst of a 30,000-star cluster.

One day, an undiscovered moon slots into place, setting off an extremely rare three-way solar eclipse that plunges the planet into terrifying darkness. The glittering stars in the black sky reveal to the people that their planet is much less significant than they believed. That, combined with the darkness no one has ever experienced before, causes the whole world to go berserk. Civilization burns itself down as people light fires to stave off the night.

We learn that this is a repeating cycle. The eclipse happens every 2,049 years, but the survivors can’t seem to pass down the story of the moon or the stars or the eclipse far enough to save their descendants. Their society can’t remember anything that long.

The night after I finished the story, I stared at the New Hampshire sky until my eyes hurt. Before “Nightfall,” I’d thought of stars as bright and beautiful pinpricks. Now they appeared as little beacons of hubris. What don’t we know, the story made me wonder — what don’t we remember?

The more you learn about total solar eclipses, the more impossible they seem. For one thing, it is a complete cosmic coincidence that the sun and the moon can appear to be the same size in the sky. The moon happens to be about 400 times smaller than the sun, and the sun happens to be about 400 times further from the Earth than the moon. That happenstance geometry is what allows for a perfect match for our celestial discs, making our solar eclipse unique in the solar system and likely far beyond.

For another thing, our solar eclipses are a coincidence of time: 50 million years ago, the moon was too close to Earth, and 50 million years from now, it will be too far away. Outside this relatively small window of time, eclipses aren’t precisely and aesthetically aligned. So we’re living on the perfect planet at the perfect moment to see a near-perfect alignment, this ultimate trick of the light.
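
That “same apparent size” coincidence is easy to verify: an object’s angular diameter is set by its physical diameter divided by its distance. A quick check in Python, using standard mean values for the two bodies (textbook figures, not numbers from the essay):

    import math

    # Mean diameter and distance in kilometers (standard astronomical values).
    bodies = {
        "Sun":  (1_391_400, 149_600_000),
        "Moon": (3_475,     384_400),
    }

    for name, (diameter, distance) in bodies.items():
        angle = math.degrees(2 * math.atan(diameter / (2 * distance)))
        print(f"{name}: {angle:.2f} degrees across")

Both come out to about half a degree. The match is close rather than exact, which is why, depending on where the moon sits in its slightly elliptical orbit, some eclipses are total and others merely annular.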

Eclipses pose a significant challenge for writers, mostly because they’re hard to describe without sounding like you’re exaggerating. Annie Dillard, in her essay “Total Eclipse,” famously wrote that “seeing a partial eclipse bears the same relation to seeing a total eclipse as kissing a man does to marrying him.”

Asimov was himself once asked to provide a non-narrative, scientific description for Look Magazine. He was complimented by his editors for his simple and accurate and beautiful account. “You will see immediately around the black disc of the Moon a pinkish rim,” he wrote. “That is the Sun’s lower atmosphere. It will contain spikes and streamers in graceful arcing curves. … They will fade and grow more delicate until there is the pearly whiteness of the true corona stretching out unevenly from blackness of the covered Sun.”

Just one problem: Unbeknownst to his editors, Asimov had never actually seen a total solar eclipse in person. He admits in his autobiography that he carefully elided the truth; his portrayal was based on what he’d read, not what he’d seen.

In 2017, determined to one-up Asimov, my family arrived in central Wyoming prepared to see and capture the eclipse. Both the seeing and the capturing involve a lot of gear: We lugged two extra suitcases full of specialty solar-filtered binoculars, two stabilizing binoculars with homemade cardboard cutout solar filters you can place over the lenses, three pairs of solar-filter sunglasses for easier viewing, three tripod stands, one laptop, one umbrella and one DSLR camera whose specifications are a complete mystery to me. Together, we unfolded, unscrewed and set the stands in different corners according to my dad’s precise directions. All for an event that lasts less time than it takes to fry an egg.

We did all this advanced prep work because, as my astronomy-obsessed dad has told us many times, something is bound to go wrong, and you don’t want to be fiddling with knobs during those precious 165 seconds. Our plan was to have three cameras capture three different aspects of the experience. The first was just an iPhone, which would take a video of the surrounding environs to capture how quickly the sky goes dark, how the animals freak and flee, how the sun seems to set on every horizon. The second was also an iPhone, turned to face us to record our reactions. 

The big DSLR camera would be taking photos of the eclipse itself. My dad, a software developer, had it hooked up to a computer running a program he wrote that would automatically make the camera take photos at the exact right moments.

There are two reasons this program is superior to manually operating the camera. The first is that certain precise images you want to capture are hard to time by hand, like when the eclipse is seconds away from being full and the jagged and uneven mountains and carved-out craters of the Moon let a few tiny patches of sun escape the Moon’s coverage. Astronomers refer to these as Baily’s beads, after the astronomer Francis Baily, who described them in 1836 as “a row of lucid points, like a string of beads.” When only one or two beads are left, as the last bit of photosphere disappears, a burst of light creates what is known as the “diamond-ring effect.” The name is apt: the ring of light surrounding the moon is capped, for just a fraction of a moment, with what looks like a sparkling jewel.

The second reason for the program is so you can enjoy the eclipse. After all, your eyes will see something far more spectacular than the camera.
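
The essay never says how the program worked, so what follows is only a sketch of the general idea: precompute the contact times for your site, then sleep until each one and fire the shutter. The times and the fire_shutter stand-in here are hypothetical; a real rig would trigger the camera through a tethering library or an intervalometer port:

    import time
    from datetime import datetime, timezone

    # Hypothetical contact times for a 2017 viewing site (illustrative only).
    EVENTS = [
        ("bailys_beads",   datetime(2017, 8, 21, 17, 44, 50, tzinfo=timezone.utc)),
        ("totality_start", datetime(2017, 8, 21, 17, 45, 5,  tzinfo=timezone.utc)),
        ("totality_end",   datetime(2017, 8, 21, 17, 47, 50, tzinfo=timezone.utc)),
    ]

    def fire_shutter(label: str) -> None:
        # Stand-in for whatever actually triggers the camera.
        print(f"{datetime.now(timezone.utc).isoformat()} shot: {label}")

    for label, when in EVENTS:
        wait = (when - datetime.now(timezone.utc)).total_seconds()
        time.sleep(max(0.0, wait))   # sleep until the event, or not at all if past
        fire_shutter(label)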

But we don’t always remember what we see. The mind plays tricks. Memory is finicky. Do you remember what you were doing five years ago? Two? What about a week ago? What can you recall? Can you remember anything of your inner mind, your thoughts at that time? What were you feeling? What mattered? And even if you thought you did have an inkling, how would you know if you were right?

Personally, I find these questions taxing. Overwhelmed, I want to reach for my phone and look at my calendar, remind myself of my schedule, get a hint as to what was going on. But this, it seems to me, is cheating. And horrifying. I do not remember my own life.

In 2011, while traveling around the U.S. and Canada, the director and entrepreneur Cesar Kuriyama started recording one-second-long video clips each day, which he later stitched together into a 365-second film. One second can feel long or short, and Kuriyama’s video was a mixed procession of the mundane, profane, ordinary and extraordinary. He’s making oatmeal and then he’s at a nightclub and then he’s reading a book and then he’s clicking around on Facebook and then he’s playing minigolf and then he’s spray-painting cars and then and then and then.

A voice comes over the video: “Imagine,” Cesar says, “a movie that includes every day of the rest of your life. … I’ll never forget a day ever again.”

That’s what I wanted.

Still, I was skeptical. What could one second of video really do? As Cesar pointed out in a TED Talk, even one second serves as a powerful stimulus. It’s not just one point — it’s a series. The motion and sound, combined with the image, do much to jog your memory.

When I tried recording one-second video clips during my junior year of college, I sought out flattering footage so that the movie of myself would look cool. This had a salutary effect — it encouraged me to do more cool stuff. It also had the desired effect: Each second I recorded helped me remember far more than that one second. Looking back at the video after six months, I found my memories sharper. More than a decade later, the period is still easier to recall than others.

Every night before I went to bed, I would review that day’s footage and choose a clip. The act of choosing ended up bothering me. Selecting a scene reoriented my memory of the day — that one second became, by default, the day’s most important memory. And sometimes, it was a lie. Clips of book talks or costume parties or studying in the library created a different sense of myself than ones of me refreshing Reddit or playing online chess alone in my dorm.

I quit the project after a year. It made it too tempting to deceive myself about my own life.


In Wyoming in the hours before the eclipse, we found ourselves in a field next to a barn among a small community of umbraphiles — lovers of shadows, eclipse chasers. Families spread out, claiming turf. I met an area astronomer, a skinny man with a long white beard, who was fielding questions from other onlookers. I asked: “What does a solar eclipse actually look like?”

“I have no idea,” he said. “This will be my first, so we’ll find out together.”

In the lead-up to the frenetic moments of a total eclipse is the slow march of the partial. The partial starts as the moon kisses the sun and then bites into it, steadily chomping away. Even though it was only midday, the surrounding mountains darkened. We pulled on hats and scarves as the temperature plummeted. We sipped from mugs of now-cold coffee, we kicked at the grass, we observed the way the shadows bounced off the trees and pointed daggers down the mountains. We triple-checked our setup, we looked up through filtered lenses and then glanced at each other. Then there was nothing left to do but watch.

Afterward, everyone had a different description of the total eclipse. My brother said: “It was like someone was waving and bending a metal sheet in the sky!” “A 360-degree sunset,” added my mom. “You turned your head and the pinks and blues and reds came from every direction.”

When it was over, we all gathered around the computer to look at the photos. The main shot was a perfect black disc surrounded by a ring of white glowing light. We’d also successfully captured Baily’s beads and the elusive diamond ring. My dad’s system had worked. Technically, his photos turned out great, professional even.

But they all looked wrong to me. I learned then that all photos of eclipses, just like all written descriptions of them, fail to capture what they really look like to the human eye.

Our brains and eyes excel at interpreting scenes that photographers refer to as having high dynamic range — both very bright and very dark objects. If you use the light meter on a camera, you’ll find that it accurately registers the objective brightness of a polished white piece of marble as “darker” if you’re indoors than a pure black chunk of obsidian if you’re outdoors. Our brain’s extremely sophisticated visual system instantly and automatically adjusts the raw input from our eyes to account for this discrepancy, because it knows that black obsidian is dark and the white marble is bright.

Even advanced cameras aided by software fall short. A total solar eclipse presents the ultimate challenge: The corona of the sun is the definition of bright, but it is covered by the pitch-black circle of the moon and surrounded by extraordinarily dim background objects like stars and planets. The contrast between light and dark is at the farthest extremities of dramatic.

After the eclipse, I was flushed; I wanted to do laps around the barn. I felt like I could lift a car. But instead, I tried to sear the memory of what I had seen into my brain; I tried to reimagine what had looked impossible.

But days later, the captured images had merged with my memory, and I felt the original slip away like a mirage. The image I had in my head — of the shimmering corona, of the infinite sunsets, of the charged light — felt made up when I was looking at genuine photographic evidence of the event. I couldn’t tell what my imagination had substituted in, what had been affected by the photos or by our verbal descriptions, what I had forgotten or filled in. Thinking of it now — the most dramatic and affecting experience of the natural world in my life — I have no idea what in my memory is true.

The only solution was to see it again.


The region of the Andean foothills in northern Chile known as the Elqui Valley is famous for four things: pisco and wine distilleries, astrological spirit guides that tell fortunes and offer crystals, the poet Gabriela Mistral, and astronomical research observatories. We didn’t need the observatory telescopes to see the solar eclipse. But of all the spots in the 2019 eclipse’s path of totality, we picked this area for the same reason the researchers did: almost guaranteed clear skies.

What we didn’t account for, though, was a lack of cell service. My dad’s program doesn’t work properly without precise geographic coordinates. We went ahead and set up like we’d practiced, this time with a camera that would record a time-lapse of a digital thermometer to show the dramatic drop in temperature. We then discovered that cell phones can produce your longitude and latitude even without service. After some fussing, everything seemed to work.

We nibbled on saltines. When the partial started, someone came up with the clever idea to poke holes through a piece of paper to spell out our family name. The light-holes that shined through and spelled P-E-V-S-N-E-R each had a chunk taken out of them by the moon. We admired this. The left sides of our own shadows were blurred, the rights still sharp. The mysterious so-called “shadow bands” appeared — slithering, snake-like shades that ripple in rows on the ground, seemingly coming from nowhere. Scientists still can’t explain why this happens, a fact we love telling each other, reassuring ourselves that solar eclipses still hold mysteries.

If you see one total solar eclipse, then you’re someone who has seen one total solar eclipse. If you have seen two, you are an eclipse chaser.

I love the title. Calling it a “chase” adds some drama. It stirs up images of storm chasers with their oversized radios and steel-reinforced jeeps charging toward a tornado. But on my way to Chile, I found myself asking what it was, exactly, that I was chasing.

For a long time, I thought it was my memories. What did I really see? Is my memory true? Can I trust my own mind? But when the total eclipse in Chile slotted into place and the dazzling display began in earnest, I couldn’t believe my eyes.

What I was looking at was not what I had remembered from Wyoming. Nor did it look like the pictures. Nor did it restore my original memory. I knew — with absolute certainty — that what I was looking at was different. The first eclipse had been wavy, broiling, like bright white lava popping and crackling against the sky. This one was calm and sharp, a dramatic relief. In its surprising neatness and definition, it stuck out all the more, as if an impossibly black hole had yawned open over the mountains.

I learned in that moment that no two solar eclipses are alike. Different positions in the sky, different makeups of the atmosphere, different sunspot patterns — all and more combine to create a unique effect, every time.

No two eclipse viewers are alike either. Even as an individual, you’re different each time you see one. You come at it from a different place: different expectations before, different experiences during, different recollections after.

Just as moments in the past can’t be perfectly remembered or recreated, so too nature is ever different. The movement and interactions of species, moons, planets and stars are never the same twice.

What we chase, then, is the next one. We go forward, in hot pursuit of an indescribable, irreproducible experience. Yes, we chase because we are inspired by what we saw once — but really, we chase because we know what’s most important is that we see another.
