The compact Ford Maverick earns the Edmunds Top Rated Truck award.
The Maverick is available with an efficient hybrid powertrain and there's a street truck Lobo variant too.
Ford sweeps this category: Our highly recommended runners-up are the F-150 and Ranger.
"The Maverick does exactly what you want a compact truck to do. It's easy to drive and park, and it hauls and tows more than its fair share. When equipped with its hybrid powertrain, it achieves surprisingly good gas mileage too."
— Kurt Niebuhr, manager, vehicle testing
Why did the Maverick win?
You don't need as much truck as you think, and the Ford Maverick proves it. Despite being the smallest pickup on sale, the Maverick packs capability, practicality, and utility into a package that fits anywhere. With an Edmunds Rating of 7.6 out of 10, it doesn't just acquit itself well in its class; it's great no matter how you slice it. Throw in an affordable starting price of $29,840 and the more than 40 mpg combined we've seen in our testing, and the Maverick becomes the consummate all-rounder that most pickup trucks aspire to be.
Highly Recommended
These are the Edmunds Top Rated 2026 honorable mentions we’d also recommend to our friends and family.
2026 Ford F-150
The Ford F-150 remains the best full-size pickup you can buy thanks to its sheer versatility. It offers a vast range of engine options, including a hybrid that adds both fuel efficiency and stunning performance. An optional onboard generator can power your major appliances in the event of an outage, the hardcore Raptor models are stunningly capable off-road, and the lower-level STX and XLT trims offer capability at an affordable price.
2026 Ford Ranger
The Ford Ranger offers great utility in a package that can be described as "right-sized" for most. If the F-150 is too big and you need to tow more than a Maverick can on a regular basis, the Ranger has you covered. Its starting price of $35,245 is downright affordable, it offers a great interior tech suite, and the Raptor model is guaranteed to put a smile on your face — whether you're off-road or not.
See the other Edmunds Top Rated 2026 award winners
As I discussed a few days ago (Part 1), there are two sides to the ABS Challenge coin:
the direct run benefit of flipping the call
the indirect cost of using up a challenge
Yesterday (Part 2), I went through the calculations to establish the run value of an ABS Challenge given the base-out and ball-strike situation. And we determined the obvious: there is a huge benefit to getting an overturn on a 3-2 count, or when you can flip into or out of a walk or strikeout, as the case may be. And first-pitch or second-pitch challenges, especially with the bases empty, have little run potential.
Now, to complete our trilogy, we need to establish the breakeven point and to do that, we need to know about the cost.
Let's talk about something we are more used to, and that's the SB attempt. In the typical runner situation, a successful SB increases run expectancy by about 0.20 runs, while a CS costs about 0.45 runs. The breakeven point is 0.45/(0.45 + 0.20) ~= 70%. But in some situations, the cost of a CS jumps substantially, pushing the breakeven point to 85% or even 90%. In other situations, the cost of a CS drops enough that the breakeven point falls to 60% or lower. A smart baserunner will intuitively work out those breakeven points in their head, based on experience.
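That stolen-base math can be sketched in a few lines. This is just the algebra from the paragraph above; the 0.20/0.45 run values are the illustrative numbers quoted there, not league-wide constants.

```python
def breakeven(gain, cost):
    """Success rate p at which the expected run value is zero:
    p * gain - (1 - p) * cost = 0  =>  p = cost / (cost + gain)."""
    return cost / (cost + gain)

# Typical stolen-base values from above: +0.20 runs for a SB, -0.45 for a CS.
sb_breakeven = breakeven(gain=0.20, cost=0.45)
print(round(sb_breakeven, 2))  # → 0.69, i.e. roughly 70%
```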
The average umpire call will change the run expectancy by about 0.16 runs. A 2-0 call, for example, will turn the run potential of 0.094 runs into either 3-0 (0.202) or 2-1 (0.032). That's a range of 0.170 runs.
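As a quick check on that arithmetic, using the count values quoted above (illustrative numbers, not a full run-expectancy table):

```python
# Run potential by count, from the paragraph above.
run_potential = {"2-0": 0.094, "3-0": 0.202, "2-1": 0.032}

# A 2-0 call resolves to either 3-0 or 2-1; the spread between those
# two outcomes is what that single umpire call is worth.
swing = run_potential["3-0"] - run_potential["2-1"]
print(round(swing, 3))  # → 0.17
```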
However, when a batter or catcher challenges a call, they are not going to challenge randomly, but strategically. And in 2025 AAA, the average run value of an overturned call was about 0.20 runs. That is our target.
Naturally, you would expect a player with only one challenge remaining to be even more careful with it. As it turns out, that does not happen: the gain on overturned calls is just 0.01 runs greater with one challenge remaining than with two.
And something similar happens with the inning of the challenge: the run value of the overturned call remains pretty flat throughout the game.
This really means that when a player challenges, they are focused more on the location of the pitch than on when in the game the pitch happens. At least, that's what happens in AAA; I'm quite sure things will change in MLB.
Anyway, with a fairly stable cost of 0.20 runs, we can calculate our breakeven point by comparing the run impact of an overturned call to that base value of 0.20 runs. A bases-loaded 3-2 count, for example, will impact the game by an astounding 1.8 runs. So the benefit-to-cost ratio is 9 to 1. And when you have those kinds of odds, you only need to be right 10% of the time to break even: 10% of +1.80 is balanced by 90% of -0.20. So you will find that any pitch that is close will get challenged at 3-2 with the bases loaded. The inning and the number of challenges remaining won't matter, since the benefit is huge. A close pitch is by definition a 50/50 call, so if the batter and catcher are as discerning as an umpire, they really have no choice but to challenge.
On the flip side is a bases-empty, 2-out, first-pitch call: in order to challenge that, the breakeven point is 88%. That means it has to be an egregiously obvious call in order to challenge. Even if you think you are pretty sure it was the wrong call, the player should not challenge it. Being 75% sure is not good enough. Even 80% or 85% is not good enough. You need to be about 90% sure it was a bad call. That's because the difference between a 1-0 count (with bases empty and 2 outs) and an 0-1 count is the difference between 0.12 runs and 0.09 runs. With a difference of 0.03 runs, compared to our baseline cost of 0.20 runs, that's a ratio of 0.15 to 1. The odds are overwhelmingly against a challenge unless it's a sure thing.
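Both endpoints fall out of the same breakeven formula, p = cost / (cost + gain), using the 0.20-run cost from the AAA overturn data. A minimal sketch, with the run swings taken from the two examples above:

```python
def breakeven(gain, cost):
    # p * gain = (1 - p) * cost  =>  p = cost / (cost + gain)
    return cost / (cost + gain)

CHALLENGE_COST = 0.20  # average run value of an overturned call (2025 AAA)

# Bases loaded, 3-2 count: a flipped call swings ~1.8 runs.
print(round(breakeven(1.80, CHALLENGE_COST), 2))  # → 0.1: challenge anything close

# Bases empty, 2 outs, first pitch: the swing is only ~0.03 runs.
print(round(breakeven(0.03, CHALLENGE_COST), 2))  # → 0.87: only egregious misses
```

The second number lands at roughly the ~88% breakeven quoted above (the small gap is rounding in the example run values).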
Anyway, so that's how it works. If/when MLB players start challenging more carefully based on the inning and the number of challenges remaining, I'll create an updated breakeven chart to handle that nuance. Until then, here's the chart that every manager, catcher and batter is going to memorize (click to embiggen).
Pitchers are not going to bother because they think every close pitch they throw is a strike, and even their own catcher is going to fool them with Framing. Pitchers will have plenty on their minds already without needing to worry about their breakeven points of challenges. Luckily for them, they have a catcher who will know all this, 140 games a year. And the backup catcher? Well, they need to help their primary catcher, because this is a great way for them to show off their Challenge Awareness Skill.
I’m not sure whether to say “thank you” or “HOW COULD YOU DO THIS TO ME”, but this one goes out to all the people who sent me advice on buying software last fall.
This is the second in a two-part episode. The first part ended on a ✨cliffhanger!!!✨ — so if you missed the first episode, catch up here:
The response was overwhelming. I heard from software engineers, SREs, observability leads, CTOs, VPs, distinguished engineers, consultants, even the odd CISO. All these emails and responses and lengthy threads kept me busy for a while, but eventually I had to get back to writing. That’s when I discovered, to my unpleasant surprise, that I couldn’t seem to write anymore.
“Well,” I reasoned, “maybe I’ll just ask the internet for EVEN MORE advice” — and out popped Buffy-themed post number two, on October 13th.
Keep in mind, I thought I would be done by then. November was my stretch deadline, my just-in-case, better-leave-myself-some-breathing-room kind of deadline.
As November 1st came and went, my frustration began spiraling out into blind panic. What the hell is going on and why can I not finish this???
In which I finally listen to the advice I asked for
A week before Thanksgiving, I was up late tinkering with Claude. I imported all the emails and advice I had gotten from y’all, and started sorting into themes and picking out key quotes, and that is when it finally hit me: I had written the wrong thing.
No, this deserves a bigger font.
✨I wrote the wrong thing.✨
I wrote the wrong thing, for the wrong people, and none of it was going to move the needle in any meaningful way.
The chapters I had written were full of practical advice for observability engineering teams and platform engineering teams wrestling with implementation challenges like instrumentation and cost overruns. Practical stuff.
Yes.
The internet was right (this ONE time)
My inbox, on the other hand, was overflowing with stories like these:
“Many times [competitive research] is faked. One person has their favorite option and then they do just enough ‘competitive analysis’ to convince the sourcing folks that due diligence was done or to nullify the CIO/CTO/whoever is accepting this on to their budget”
“We [the observability team] spent six months exhaustively trialing three different solutions before we made a decision. The CEO of one of the losing vendors called our CEO, and he overruled our decision without even telling us.” (Does your CEO know anything at all about engineering??) “No.”
“Our SRE teams have vetoed any attempt to modernize our tool stack. ($Vendor) is part of their identity, and since they would have to help roll out and support any changes, we are stuck living in 2015 apparently forever.” (What does management have to say?) “It’s been twenty years since they touched a line of code.”
“We’re weird in that most of the company hates technology and really hates that we have to pay for it since they don’t understand the value it brings to the company. This is intentional ignorance, we make the value props continually and well, we just haven’t succeeded yet….We’re a little obsessed with trying to get champagne quality at Boone’s prices.”
“When it comes to dealing with salespeople and the enterprise sales process, the best tip for engineers is to not anthropomorphize sales professionals who are driven by commission. The best ones are like robot lawn mowers dressed in furry unicorn costumes. They may seem cute and nice but they do not care about anything besides closing the next deal….All of the best SaaS companies are full of these friendly fake unicorn zombies who suck cash instead of blood.”
Nearly all of the emails I got were either describing a terminally fucked up buying process from the top down, or the long term consequences of those fucked up decisions.
In other words: I was writing tactical advice for teams who were surviving in a strategic vacuum.
So I threw the whole thing out, and started over from scratch. 😭
Even good teams are struggling right now
As Tolstoy once wrote, “Happy teams are all alike; every fucked up team is fucked up in its own precious way.”
There is an infinity of ways to screw something up. But there is one pattern I see a critical mass of engineering orgs falling into right now, even orgs that are generally quite solid. That is when there is no shared alignment, or even shared vocabulary, between engineering and other stakeholders — directors, VPs and SVPs, CTOs, CIOs, principal and distinguished engineers — on some pretty clutch questions. Such as:
“What is observability?”
“Who needs it?”
“What problem are we trying to solve?”
And my favorite: “Is observability still relevant in a post-AI era? Can’t agents do that stuff now?”
Even some generally excellent CTOs[1] have been heard saying things like, “yeah, observability is definitely very important, but all our top priorities are related to AI right now.”
Which gets causality exactly backwards. Because your ability to get any returns on your investments into AI will be limited by how swiftly you can validate your changes and learn from them. Another word for this is “OBSERVABILITY”.
Enough ranting. Want a peek? I’ll share the new table of contents, and a sentence or two about a couple of my own favorite chapters.
Part 6: “Observability Governance” (v2)
The new outline is organized to speak to technical decision-makers, starting at the top and loosely descending. What do CTOs need to know? What do VPs and distinguished engineers need to know? And so on. We start off abstract and become more concrete.
Since the technical terms (e.g., high cardinality, high dimensionality) have all become overloaded and undifferentiated by too much sales and marketing, we mostly avoid them. Instead, we use the language of systems and feedback loops.
Again, we are trying to help your most senior engineers and execs develop a shared understanding of “What problem are we solving?” and “What is our goal?” Technical terms can actually detract and distract from that shared understanding.
An Open Letter to CTOs: Why Organizational Learning Speed Is Now Your Biggest Constraint. Organizations used to be limited by the speed of delivery; now they are limited by how swiftly they can validate and understand what they delivered.
Systems Thinking for Software Delivery. Observability is the signal that connects the dots to make a feedback loop; no observability, no loop. What happens to amplifying or balancing loops when that signal is lossy, laggy, or missing?
The Observability Landscape Through a Systems Lens. What feedback loops do developers need, and what feedback loops does ops need? How do these map to the tools on the market?
The Business Case for Observability. Is your observability a cost center or an investment? How should you quantify your RoI?
Diagnosing Your Observability Investment
The Organizational Shift
Build vs Buy (vs Open Source)
The Art and Science of Vendor Partnerships. Internal transformations run on trust and credibility; vendor partnerships run on trust and reciprocity. We’ll talk about both of these, as well as how to run a strong POC.
Instrumentation for Observability Teams
Where to Go From Here
Hey, I have a lot of empathy right now for leaders and execs who feel like they’re behind on everything. I feel it too. Anyone who doesn’t is lying to themselves (or their name is Simon Willison).
But the role observability plays in complex sociotechnical systems is one of those foundational concepts you need to understand. You’re not gonna get this right by accident. You’re not going to win by doing the same thing you were doing five years ago. And if you screw up your observability, you screw up everything downstream of it too.
To those of you who do understand this, and are working hard to drive change in your organizations: I see you. It is hard, often thankless work, but it is work worth doing. If I can ever be of help: reach out.
A longer book, but a better book
The last few chapters are heading into tech review on Friday, February 20th. Finally. The last 3.5 months have been some of the most panicky and stressful of my life. I….just typed several paragraphs about how terrible this has been, and deleted them, because you do not need to listen to me whine. ☺️
Like I said, I have never felt especially proud of the first edition. I am not UN proud, it’s just…eh. I feel differently this time around. I think—I hope—it can be helpful to a lot of different people who are wrestling with adapting to our new AI-native reality, from a lot of different angles.[2]
Thanks, Christine. (Another for the folder marked “NOW YOU TELL ME”)
I am incredibly grateful to my co-authors, collaborators, and our editor, Rita Fernando, without whom I never would have made it through.
But there’s one more group that deserves some credit, and it’s…you guys. I asked for help, and help I got. So many people wrote me such long, thought-provoking emails full of stories, advice and hard-earned wisdom. The better the email, the more I peppered you with followup questions, which is a great way to punish a good deed.
Blame these people
I am a tiny bit torn on whether to say “thank you” or “fuck you”, because my life would have been much nicer if I had stuck to the plan and wrapped in October.
But the following list of people were especially instrumental in forcing me to rethink my approach. It made the book much stronger, so if you catch one of them in the wild, please buy them a stiff drink. (Or buy yourself one, and throw it in their face with my sincere compliments.)
Abraham Ingersoll, the aforementioned “odd CISO”, who would be quoted in the book had his advice not been so consistently unprintable by the standards of respectable publications
Benjamin Mann of Delivery Hero, who I would work for in a heartbeat, and not just for the way he wields “NOPE” as a term of art
Marty Lindsay, who has spent more time explaining POCs and tech evals to me than anyone should have to. (If you need an o11y consultant, Marty should be your very first stop).
Sam Dwyer, whose stories seeded my original plan to write a set of chapters for observability engineering teams. (I hope the replacement plan is useful too!)
Many others sent me terrific advice, and endured multiple rounds of questions and more questions and clarifications on said questions. A few of them:
Matthew Sanabria, Chris Cooney, Glen Mailer, Austin Culbertson, John Scancella, John Doran, Bryan Finster, Hazel Weakly, Chris Ziehr, Thomas Owens, Mike Lee, Jay Gengelbach, Will Hegedus, Natasha Litt, Alonso Suarez, Jason McMunn, Evgeny Rubtsov, George Chamales, Ken Finnegan, Cliff Snyder, Robyn Hirano, Rita Canavarro, Matt Schouten, Shalini Samudri Ananda Rao (Sam).
I am definitely forgetting some names; I will try to update the list as I remember them.
But seriously: thank you, from the bottom of my heart. I loved hearing your stories, your complaints, your arguments about how the world should improve. Your DNA is in this book; I hope it does you justice.
~charity
💜💙💚💛🧡❤️💖
[1] It’s ironic (and makes me uncomfortably self-conscious), but some of the worst top-down decision-making processes I have ever seen have come from companies where the CEO and CTO are both former engineers. The confidence they have in their own technical acumen may not be wholly unfounded, but it is often ten or more years out of date. We gotta update those priors, my friends. Stay humble.
[2] On the other hand, as my co-founder, Christine Yen, informed me last week: “Nobody reads books anymore.”
One of the surprising (at least to me) consequences of the fall of Twitter is the rise of LinkedIn as a social media site. I saw some interesting posts I wanted to call attention to:
First, Simon Wardley on building things without understanding how they work:
A few years ago, I attended a national conference on technological literacy… One of the main speakers, a sociologist, presented data he had gathered in the form of responses to a questionnaire. After a detailed statistical analysis, he had concluded that we are a nation of technological illiterates. As an example, he noted how few of us (less than 20 percent) know how our telephone works.
This statement brought me up short. I found my mind drifting and filling with anxiety. Did I know how my telephone works?
I squirmed in my seat, doodled some, then asked myself, What does it mean to know how a telephone works? Does it mean knowing how to dial a local or long-distance number? Certainly I knew that much, but this does not seem to be the issue here.
No, I suspected the question to be understood at another level, as probing the respondent’s knowledge of what we might call the “physics of the device.” I called to mind an image of a diaphragm, excited by the pressure variations of speaking, vibrating and driving a coil back and forth within a magnetic field… If this was what the speaker meant, then he was right: Most of us don’t know how our telephone works.
Indeed, I wondered, does [the speaker] know how his telephone works? Does he know about the heuristics used to achieve optimum routing for long distance calls? Does he know about the intricacies of the algorithms used for echo and noise suppression? Does he know how a signal is transmitted to and retrieved from a satellite in orbit? Does he know how AT&T, MCI, and the local phone companies are able to use the same network simultaneously? Does he know how many operators are needed to keep this system working, or what those repair people actually do when they climb a telephone pole? Does he know about corporate financing, capital investment strategies, or the role of regulation in the functioning of this expansive and sophisticated communication system?
Does anyone know how their telephone works?
There’s a technical interview question that goes along the lines of: “What happens when you type a URL into your browser’s address bar and hit enter?” You can talk about what happens at all sorts of different levels (e.g., HTTP, DNS, TCP, IP, …). But does anybody really understand all of the levels? Do you know about the interrupts that fire inside of your operating system when you actually strike the enter key? Do you know which modulation scheme is being used by the 802.11ax Wi-Fi protocol in your laptop right now? Could you explain the difference between quadrature amplitude modulation (QAM) and quadrature phase shift keying (QPSK), and could you determine which one your laptop is currently using? Are you familiar with the relaxed memory model of the ARM processor? With how garbage collection works inside the JVM? Do you understand how the field effect transistors inside the chip implement digital logic?
I remember talking to Brendan Gregg about how he conducted technical interviews, back when we both worked at Netflix. He told me that he was interested in identifying the limits of a candidate’s knowledge, and how they reacted when they reached that limit. So, he’d keep asking deeper questions about their area of knowledge until they reached a point where they didn’t know anymore. And then he’d see whether they would actually admit “I don’t know the answer to that”, or whether they would bluff. He knew that nobody understood the system all of the way down.
In their own ways, Wardley, Jacob, Perens, and Bucciarelli are all correct.
Wardley’s right that it’s dangerous to build things where we don’t understand the underlying mechanism of how they actually work. This is precisely why magic is used as an epithet in our industry. Magic refers to frameworks that deliberately obscure the underlying mechanisms in service of making it easier to build within that framework. Ruby on Rails is the canonical example of a framework that uses magic.
Jacob is right that AI is changing the way that normal software development work gets done. It’s a new capability that has proven itself to be so useful that it clearly isn’t going away. Yes, it represents a significant shift in how we build software, it moves us further away from how the underlying stuff actually works, but the benefits exceed the risks.
Perens is right that the scenario that Wardley fears has, in some sense, already come to pass. Modern CPU architectures and operating systems contain significant complexity, and many software developers are blissfully unaware of how these things really work. Yes, they have mental models of how the system below them works, but those mental models are incorrect in fundamental ways.
Finally, Bucciarelli is right that systems like telephony are so inherently complex, have been built on top of so many different layers in so many different places, that no one person can ever actually understand how the whole thing works. This is the fundamental nature of complex technologies: our knowledge of these systems will always be partial, at best. Yes, AI will make this situation worse. But it’s a situation that we’ve been in for a long time.
Hackers associated with the Chinese government used a Trojaned version of Notepad++ to deliver malware to selected users.
Notepad++ said that officials with the unnamed provider hosting the update infrastructure consulted with incident responders and found that it remained compromised until September 2. Even then, the attackers maintained credentials to the internal services until December 2, a capability that allowed them to continue redirecting selected update traffic to malicious servers. The threat actor “specifically targeted Notepad++ domain with the goal of exploiting insufficient update verification controls that existed in older versions of Notepad++.” Event logs indicate that the hackers tried to re-exploit one of the weaknesses after it was fixed but that the attempt failed.
During an exhibition, Japanese volleyball player Yuji Nishida hit a courtside judge in the back with an errant serve. He immediately sprinted across the court and dove prostrate in apology. The gesture was a sort of sliding dogeza:
Even in a country where a sincere apology can go a long way, Nishida’s mea culpa was an extreme example. The most extravagant form of apology in Japanese culture is the dogeza, which can also be used to express deep respect.
When used as an apology, the person in the wrong prostrates themselves and bows so that their forehead touches the floor between their hands. While the dogeza is rarely seen in public, scandal-hit politicians have used equally theatrical gestures to communicate their remorse.
Nishida followed up his slide with several more bows.