Every hour, the Earth receives enough sunlight to power all of human civilization for a year. It arrives silently, from all directions, at no cost. We’ve known for decades how to convert this energy into usable electricity at scale using one of the most abundant elements on Earth.
Today, we still get most of our energy from resources that are gone the moment you burn them and leave the atmosphere measurably worse on the way out. Some of the friction is legitimate: a lot of our power infrastructure was built for a different era with different goals. Some of it is a political environment where the debate is still over whether solar energy even works.
The physics has been understood for decades and the cost has fallen faster than basically any other technology in history. This piece will cover the science, the economics, and the infrastructure of solar energy.
Here Comes The Sun
When you smash two light atomic nuclei together hard enough, they fuse into a new, heavier nucleus and release a ton of energy in the form of heat and light. This is the principle behind nuclear fusion.
The Sun has been doing this for billions of years at a completely incomprehensible scale. Hydrogen atoms get smashed together in the core to eventually form helium. When the Sun eventually runs out of hydrogen, it will start fusing helium into heavier elements like carbon and oxygen. Bigger stars repeat this process all the way up to iron, and the elements heavier than iron are forged in the violence of supernova explosions and neutron star collisions. Between them, stars produced essentially every element beyond the hydrogen and helium of the early universe.
This is where Carl Sagan's famous saying “we are made of star stuff” comes from. The calcium in your bones, the iron in your blood, and the silicon in your smartphone all come from ancient stars.
This fusion process produces incredibly powerful electromagnetic radiation. That energy starts as gamma rays in the core and takes hundreds of thousands of years to work its way to the surface of the Sun. From there, it takes another eight minutes to reach Earth as a mix of infrared, visible, and ultraviolet light.
Einstein, who is now officially a recurring character in this series (see the GPS piece), proposed in 1905 that this light travels not just as a continuous wave, but also as a blast of trillions of tiny discrete particles called photons. Each photon carries a specific amount of energy, determined by its frequency. Lower energy photons have longer wavelengths and are redder, while higher energy photons have shorter wavelengths and are bluer.
It was this explanation of the photoelectric effect at the age of 26, not his work on relativity, that eventually won him the Nobel Prize in Physics.
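That frequency-energy relationship is easy to check numerically. A minimal sketch using the Planck relation E = hc/λ, with hc ≈ 1239.84 eV·nm and two illustrative wavelengths standing in for "red" and "blue":

```python
# Photon energy from wavelength via the Planck relation E = h*c / wavelength.
# Using hc ≈ 1239.84 eV·nm keeps the arithmetic in convenient units.
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy in electron-volts of a photon with the given wavelength."""
    return HC_EV_NM / wavelength_nm

red = photon_energy_ev(650)   # an illustrative red wavelength
blue = photon_energy_ev(450)  # an illustrative blue wavelength
print(f"red: {red:.2f} eV, blue: {blue:.2f} eV")  # the blue photon carries more energy
```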
Enter Silicon
Let's zoom all the way in.
Silicon (Si) is a chemical element found in great abundance on Earth. It makes up about 27% of the Earth's crust by weight, and it's the main ingredient in sand, which is mostly silicon dioxide.
Its atomic number is 14, which means it has 14 protons in its nucleus and 14 electrons orbiting it.
The outermost four electrons are the valence electrons, which are responsible for chemical bonding. When two silicon atoms sit next to each other, they can each contribute one valence electron into the space between them, forming a covalent bond. Those shared electrons belong to both atoms at once, holding them together.
When silicon atoms bond in all four directions at once, they naturally arrange into a precise repeating three-dimensional structure called a lattice. This is what the dark blue cells you see on a rooftop solar panel are made of.
When The Photons Hit
Every material has a threshold energy: photons carrying at least that much energy can knock its valence electrons free. This threshold is called the bandgap.
Insulators like glass have bandgaps so large that almost no photon can cross them. Conductors like copper have essentially no bandgap at all, so electrons flow freely through them. Silicon is somewhere in-between: a semiconductor with a bandgap of 1.12 eV, which is the energy carried by a near-infrared photon.
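Running silicon's bandgap backwards through the Planck relation E = hc/λ shows why 1.12 eV lands in the near-infrared (a back-of-the-envelope sketch, not a device model):

```python
# Longest wavelength a silicon cell can absorb: photons below the bandgap
# energy pass straight through, so the cutoff is wavelength = h*c / E_gap.
HC_EV_NM = 1239.84        # Planck constant times speed of light, in eV·nm
SILICON_BANDGAP_EV = 1.12

cutoff_nm = HC_EV_NM / SILICON_BANDGAP_EV
print(f"cutoff: {cutoff_nm:.0f} nm")  # ~1107 nm, past the ~700 nm edge of visible red
```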
Why does the bandgap exist at all?
In a single isolated atom, electrons can only exist at specific discrete energy levels, like rungs on a ladder with forbidden gaps between them.
When billions of atoms are packed together into a crystal lattice, those individual energy levels broaden into continuous “bands”. The gap between the highest filled band, the valence band, and the lowest empty one, the conduction band, is the bandgap.
Its exact size depends on the specific atomic structure and spacing of the crystal, which is why every material has a different one. The full explanation requires quantum mechanics and Bloch’s theorem, which is well beyond this piece. The Wikipedia article on bandgap theory is a good starting point if you want to go deeper.
When photons hit the lattice, they can free electrons, generating heat as a byproduct. The name of the game is to put as much of the light's energy as possible into freeing electrons, which are what we convert into electricity, and as little as possible into heat.
You can use the diagram below to see how the specific energies of photons, or color, interact with the lattice.
Playing with the diagram above shows that, theoretically, hitting the lattice with photons of a single precise frequency tuned to the bandgap (essentially a laser) would produce the most freed electrons and the least heat. But we don't get to choose our light source: the Sun delivers a broad mix of frequencies, and we take its light as it naturally comes in.
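The energy bookkeeping above can be made concrete with a toy model: an absorbed photon gives the freed electron one bandgap's worth of energy and sheds the excess as heat, while a sub-bandgap photon isn't absorbed at all. The photon energies below are illustrative picks, and all real device losses are ignored:

```python
SILICON_BANDGAP_EV = 1.12

def energy_split(photon_ev: float, gap_ev: float = SILICON_BANDGAP_EV):
    """Return (useful_ev, heat_ev) for one incoming photon, toy model."""
    if photon_ev < gap_ev:
        return 0.0, 0.0                     # sub-bandgap photon: not absorbed at all
    return gap_ev, photon_ev - gap_ev       # electron keeps the gap, the rest is heat

for name, ev in [("infrared", 0.9), ("near-IR", 1.2), ("green", 2.3), ("blue", 2.8)]:
    useful, heat = energy_split(ev)
    print(f"{name}: {useful:.2f} eV useful, {heat:.2f} eV heat")
```

Bluer photons free an electron just as redder ones do, but a larger share of their energy ends up as heat, which is the trade-off the diagram illustrates.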
Can We Do Better?
The Sun is a constant; it always sends us the same continuous mix of frequencies. We can experiment with different materials with different bandgaps, but it turns out silicon’s 1.12 eV is close to the theoretical optimum for converting sunlight into electricity. This works out well for us, given how abundant it is.
William Shockley and Hans Queisser worked out the math in 1961, seven years after the first working silicon solar cell was built at Bell Labs. They showed that for a cell tuned to a single bandgap (also called a single-junction cell), regardless of material or engineering, there's a hard theoretical ceiling on efficiency of around 33%. This is the Shockley-Queisser limit.
In practice, rooftop panels land between 20% and 23% efficiency. The gap between theory and reality comes from reflection off the surface, electrons recombining before they reach the junction, and other real-world losses.
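Those percentages map directly onto watts. A rough sketch of one panel's output under standard test conditions (1000 W/m² of irradiance; the area and efficiency are typical values, not a specific product):

```python
IRRADIANCE_W_PER_M2 = 1000   # standard test condition: full midday sun
PANEL_AREA_M2 = 1.8          # typical residential panel
EFFICIENCY = 0.21            # middle of the 20-23% range above

power_w = IRRADIANCE_W_PER_M2 * PANEL_AREA_M2 * EFFICIENCY
print(f"{power_w:.0f} W per panel")  # ~378 W, in line with common ~400 W panels
```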
There are ways to push past the single-junction limit. The most effective is stacking multiple layers of different semiconductor materials on top of each other, each with a different bandgap tuned to a different slice of the solar spectrum. These multi-junction cells can exceed 40% efficiency, but they are extraordinarily expensive to manufacture, so in practice they're mostly used in specialized applications where squeezing out every last bit of efficiency is more important than cost, like spacecraft.
One exciting near-term frontier is perovskites, a class of materials with a tunable bandgap that can be adjusted by tweaking their chemical composition. Layered on top of silicon, a perovskite-silicon tandem cell could theoretically push past 40% efficiency at a price point close to conventional silicon. Labs around the world are racing to make it work at scale. It all sounds a bit like science fiction to me, but it's real.
Doping The Lattice
Pure silicon is a poor conductor. It has very few free charge carriers in its natural state, so even when photons knock electrons loose, they quickly recombine and nothing particularly useful happens.
The fix is subtle. By introducing trace amounts of impurities into the crystal, roughly one foreign atom per million silicon atoms, you can completely change how it behaves electrically. This process, almost too simple to be true, is called doping.
The solar cells we commonly use are made of a single silicon crystal with two distinct regions side by side. One has a trace of phosphorus mixed in, giving it extra free electrons with nowhere to go: this is the n-type side. The other has a trace of boron, where each boron atom is missing one valence electron, leaving a positively charged vacancy called a hole: this is the p-type side. The foundation of semiconductors is that electrons naturally migrate toward these holes.
What do the impurities actually do?
Phosphorus has 5 valence electrons instead of silicon’s 4. When a phosphorus atom sits in the silicon lattice, it forms the same four covalent bonds as its neighbors but has one electron left over with no bond to join. That electron is only loosely held and essentially free to move through the crystal.
Boron is the mirror image. With only 3 valence electrons, it can only form three bonds, leaving one bond site unfilled. That vacancy is the hole, and it propagates through the lattice as neighboring electrons shift to fill it.
Where these two regions meet, electrons diffuse from the n-side toward the p-side and holes diffuse the other way. They recombine near the boundary, leaving behind a thin zone depleted of free carriers called the depletion zone. This charge separation creates a built-in electric field. When light hits the crystal and frees an electron near the junction, that field separates the electron from its hole before they can recombine, pushing the electron toward the n-side and the hole toward the p-side. That separation is what drives current through the wire.
The remarkable thing is how little doping is needed. Concentrations as low as one part per billion can measurably change a material’s conductivity. For a deeper dive, the Wikipedia article on semiconductor doping covers the full picture.
This whole mechanism, a built-in field separating light-freed charges and driving them through a circuit, is the photovoltaic effect: the direct conversion of light into electricity through a semiconductor material.
Is this similar to how LEDs work?
Yes, almost exactly, but in reverse. A solar cell takes photons in and produces current. An LED takes current in and produces photons.
In an LED, electrons are pushed across the p-n junction from the other direction. When an electron falls into a hole, it releases the energy difference as a photon. The color of the light depends on the bandgap of the material, which is why different LED materials produce different colors. Gallium nitride produces blue light. Indium gallium phosphide produces red.
This is also why LEDs are so efficient: the same mechanism that makes solar cells work makes LEDs work. The Wikipedia article on LED physics goes into the full detail.
From The Panel To Your Plug
The animation above shows a closed loop, where the electron ends up in the same system it started in. In reality, the system interconnects with a massive shared network.
Your panels produce DC (direct current) electricity, meaning electrons only ever flow in one direction. An inverter, a box usually mounted near your electrical panel, converts it to AC (alternating current), which is what your home and the grid run on, largely because AC can be stepped up with transformers to the high voltages that make long-distance transmission efficient.
Your home uses what it needs first. If your panels are producing more than you’re consuming, the excess flows backward through your meter into the grid, or into a battery if you have one. Your utility credits you for what you export and charges you for what you draw in a system called net metering. At night, when your panels are dark, the process flips and you draw from the grid or your battery.
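The billing logic is simple enough to sketch. Below is a toy model of one day of net metering with made-up hourly numbers and a single flat rate for both import and export (real tariffs vary widely, and many utilities pay less for exports than they charge for imports):

```python
# Toy net-metering day: hour-by-hour solar production vs. household consumption.
# All numbers are illustrative, not from any real utility.
production_kwh  = [0,0,0,0,0,0, 0.2,0.8,1.5,2.2,2.6,2.8,
                   2.8,2.6,2.2,1.5,0.8,0.2, 0,0,0,0,0,0]
consumption_kwh = [0.4]*6 + [0.8,1.0,0.6,0.5,0.5,0.6,
                   0.6,0.5,0.5,0.6,1.0,1.5, 1.8,1.6,1.2,0.8,0.5,0.4]

rate = 0.30  # $/kWh, same price for import and export in this sketch
imported = sum(max(c - p, 0) for p, c in zip(production_kwh, consumption_kwh))
exported = sum(max(p - c, 0) for p, c in zip(production_kwh, consumption_kwh))
bill = (imported - exported) * rate
print(f"imported {imported:.1f} kWh, exported {exported:.1f} kWh, bill ${bill:.2f}")
```

In this made-up day the midday exports slightly outweigh the evening imports, so the credit ends up covering the whole bill.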
Solar's Steep Cost Curve
There's a theory in manufacturing called Wright's Law: for every doubling of cumulative production, costs fall by a fixed percentage. In other words, the more of something humanity has ever made, the better it gets at making it. Solar has been one of the most dramatic demonstrations of this principle in history. The cost of solar cells has dropped by roughly 20% for every doubling of global production.
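Wright's Law is easy to simulate. A sketch of that ~20% learning rate; the $100/watt starting point is only an illustrative figure in the neighborhood of 1970s module prices, and the per-doubling decline is the one number taken from the text:

```python
LEARNING_RATE = 0.20   # cost falls ~20% per doubling of cumulative production

def cost_after_doublings(initial_cost: float, doublings: int) -> float:
    """Wright's Law: each doubling of cumulative output cuts cost by a fixed share."""
    return initial_cost * (1 - LEARNING_RATE) ** doublings

# Illustrative: a $100/watt starting point after 0, 5, and 10 doublings.
for n in (0, 5, 10):
    print(f"{n:2d} doublings: ${cost_after_doublings(100, n):.2f}/W")
```

Ten doublings cut the cost by roughly 90%, which is why the curve looks so dramatic on a chart: the decline compounds.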
A few things have made the curve even steeper than economic theory alone would predict. China made a strategic decision to invest heavily in solar manufacturing, driving prices down through sheer volume. And because solar cells are closely related to the semiconductors that power all of modern computing, advances in silicon purification and chip fabrication have spilled over into solar.
Too good to be true?
This piece has covered a lot of ground. We started with a nuclear reactor 93 million miles away, hundreds of thousands of years ago, and followed that energy all the way to rooftops around the world. It sounds too good to be true. So why isn't this technology everywhere?
For one, we have seen substantial growth. Global solar generation has grown over 10-fold since 2015 and now accounts for roughly 7% of global electricity. Nearly half of that comes from China, which made a strategic bet on solar manufacturing two decades ago and won.
But the cost picture is more complicated than the module price curve suggests. Solar cells themselves are now almost negligibly cheap, making up only about 13% of the total installed cost of a rooftop system in the United States. The inverter and other hardware add another third. The rest is everything else: installation labor, permitting fees, overhead, and profit.
These "soft costs" haven't fallen anywhere near as fast as the panels themselves, and in the US they're much higher than in other countries. An equivalent system costs less than a quarter of the US price in Australia, and the panels are identical. The difference is entirely in the soft costs.
Then there's the grid itself. The electricity network we rely on was designed around a small number of large, controllable power plants that could be dialed up and down on demand. Millions of small rooftop generators that only produce power when the sun is out are a fundamentally different system. The best illustration of this is something called the duck curve.
The orange line in the duck curve represents a problem. When solar drops off at sunset, the grid needs to spin up an enormous amount of power very quickly. Ironically, the plants that can do that fastest tend to burn fossil fuels.
The solution is multi-faceted. Batteries have gotten dramatically cheaper in the last decade and grid-scale storage is growing. A more interconnected grid helps too; one that can pull power from wherever the sun is shining or the wind is blowing. And a mix of energy types, solar alongside wind, hydro, and nuclear, is more resilient than any one technology alone.
At the end of 2024, roughly 956 gigawatts of solar and wind projects were sitting in "interconnection queues": massive waiting lists for permission to plug into the grid. That's a pipeline of potential power equivalent to roughly three-quarters of the United States' existing generating capacity. Not every project will be built, but the scale of the backlog shows the real bottleneck: our 20th-century grid wasn't designed with enough on-ramps for all this new energy. The panels exist, the physics works, and the economics are sound; we just need the infrastructure to let it in.
Thank you!
If you like this type of content, you can follow me on BlueSky. If you want to support me further, buying me a coffee would be much appreciated. It helps me keep the lights on and the servers running! ☕
We're just getting started.
Subscribe for more thoughtful, data-driven explorations.
