As an engineer, I analyze and solve problems every day. These problems are usually pretty narrow and specific to what I'm designing at work, so they don't make good examples for talking about analysis. But the general principles I use are valid for pretty much any problem out there. I've found that examples of these principles of analysis abound in the economic issues facing the US today, so to give them more meaning, we can link each of these principles to a major economic issue. The intent here is not to solve the country's economic problems in one article, but to get a sense of how analysis can be used to understand big, real-world problems that everyone is likely to be aware of. Then you can apply those principles to the more specific problems that you deal with on a daily basis.
Context
To even hope to begin solving a problem, first you must understand it. No problem in the real world happens in a vacuum, and neither should our decisions about how to solve those problems. Context is the information about the problem that should keep you from blindly adhering to a pet solution. We all have solutions that we fall back on because they're what we know, but just because a solution was right in one situation doesn't mean it will be right in all situations. Even if the conditions look fairly similar, something might have changed to make the previous decision wrong in the current context.
The obvious economic example of the need for context is the current state of the Federal Reserve's short-term interest rate, which has been stuck at basically zero for the past six years. In normal times, such a policy would have resulted in soaring inflation, and indeed, many knowledgeable people predicted this outcome. But, as Paul Krugman keeps trying to tell us, we are not living in normal times. Current context trumps general rules for how the economy behaves. The housing bubble that burst in 2008 left a lot of consumers in mountains of debt and a lot of businesses without customers. That set of circumstances turns most of macroeconomics on its head, so to make progress on the solutions that will fix the economy, one must first understand the context of the problem we're dealing with now. Keep that in mind the next time you're trying to solve a problem that you think you've solved before.
Trade-offs
Once you know the context, it's time to understand the most important aspects of the problem and how they relate to each other. Some aspects of a problem will be in direct opposition to each other. Others form a group that cannot all be optimized at once. If you could optimize everything with no downsides, there would be no problem, right? There are trade-offs inherent to any problem, and optimizing those trade-offs within the context that defines your problem requires sharp judgment.
Taxes are a good example of a problem with trade-offs at multiple levels. How much to tax is an issue. Should we tax more and provide more government services, or should we tax less and cut government spending? Who to tax is an issue. Should we tax everyone at the same rate, or have a progressive tax system? Should we tax certain people less, like people who have children, have a mortgage payment, or donate to charity? What to tax is an issue. Should we tax income, capital, wealth, consumption, bad habits, use of services, or all of the above? Why we tax is even an issue. Should we tax to raise revenue, redistribute income, or regulate businesses?
There are tons of trade-offs when figuring out a tax system, and these are just the big ones. No wonder tax systems are a mess. At least knowing and understanding the trade-offs can help guide you to a better solution to the problem. You can also rest easier knowing that your programming problems are nowhere near as complex as the US tax system.
Magnitude
When you're analyzing a problem, be sure to consider what orders of magnitude you're working with for different aspects of the problem. If you're considering different options whose benefits are of completely different magnitudes, you're going to want to know that. Similarly, if a particular option has negative consequences that are of a different magnitude than the benefits, you're going to want to know that, too. Being able to judge the magnitude of different options and their consequences enables you to make much better decisions.
Take the minimum wage debate. Some people claim that raising the minimum wage will raise prices, negating the increased wages, but it's a question of magnitude. Sure, doubling everyone's salary in the country would likely double prices for everything, kicking off severe inflation (assuming such a thing were even possible, let alone desirable). But raising the minimum wage only raises some fraction of workers' wages. Some prices may go up, but they'll go up much less than the wages of workers who were at the old minimum, making it an overall financial win for them while not making things that much harder on the rest of us.
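To put rough numbers on it (these figures are purely hypothetical, chosen only to illustrate the arithmetic, not estimates from any study), suppose minimum-wage labor makes up a quarter of a business's costs and the minimum wage goes up 40%:

```python
# Back-of-the-envelope pass-through arithmetic with purely hypothetical numbers.
wage_increase = 0.40             # assume a 40% minimum wage hike
minwage_labor_cost_share = 0.25  # assume minimum-wage labor is 25% of total costs

# Even if every extra dollar of labor cost is passed straight into prices...
price_increase = wage_increase * minwage_labor_cost_share

print(f"Wages up {wage_increase:.0%}, prices up roughly {price_increase:.0%}")
# -> Wages up 40%, prices up roughly 10%
```

A 40% raise for the affected workers against a roughly 10% price bump is exactly the kind of magnitude gap that the "prices will just go up" argument glosses over.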
When taking the magnitudes of change into account, raising the minimum wage looks much more benign with regard to price levels. The same holds true in many software development problems, especially when it comes to performance optimization. Adding what seems like a significant amount of processing could still result in better performance if it eliminates communication to disk or over a network. Knowing the magnitudes involved when deciding on trade-offs is essential.
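As a sketch of that kind of reasoning (all of the throughput numbers below are rough assumptions for illustration, not measurements from any real system), consider whether it's worth compressing a payload before sending it over a slow link. The "extra" CPU work turns out to be an order of magnitude cheaper than the network time it saves:

```python
# Order-of-magnitude estimate: is it worth compressing before sending?
# All numbers are rough assumptions for illustration, not measurements.
payload_mb = 50
network_mb_per_s = 10        # assumed effective link throughput
compress_mb_per_s = 200      # assumed compression throughput (CPU-bound)
compression_ratio = 0.3      # assume the payload shrinks to 30% of its size

send_raw = payload_mb / network_mb_per_s
send_compressed = (payload_mb / compress_mb_per_s
                   + payload_mb * compression_ratio / network_mb_per_s)

print(f"raw: {send_raw:.2f}s, compressed: {send_compressed:.2f}s")
# -> raw: 5.00s, compressed: 1.75s -- the "extra" processing wins by a wide margin
```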
Correlations
Interpreting a correlation as causation is a special kind of assumption, and like any assumption, it can get you into trouble. Two things being correlated simply means that they move together, either in the same direction or in opposite directions. A correlation doesn't say anything about which variable depends on the other, or whether a hidden third variable is driving both of them in a more complicated way.
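A quick way to convince yourself of this is to simulate a hidden third variable. The toy model below is made up purely for illustration: x and y never influence each other, yet they end up strongly correlated because both depend on the same hidden factor.

```python
# Toy simulation: two variables that are strongly correlated only because
# a hidden third variable drives both of them. Made-up model, not real data.
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.normal(size=10_000)             # the confounder neither series "sees"
x = hidden + 0.3 * rng.normal(size=10_000)   # x depends on the hidden variable
y = hidden + 0.3 * rng.normal(size=10_000)   # so does y; x and y never touch

print(np.corrcoef(x, y)[0, 1])   # ~0.9 -- yet changing x would do nothing to y
```

Intervening on x in a world like this does nothing to y, which is exactly the trap in the example that follows.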
As an example, studies have found poverty to be correlated with single parenthood. As a result, various people advocate for programs that promote marriage as a solution to poverty. That assumes that single parenthood causes poverty, but it could very well be the other way around. If that is the case, then marriage promotion programs aren't going to be very successful at reducing poverty.
Understanding correlations will help you focus your attention on the things that will have the largest impact on the problem you're trying to solve. It's worth the time to make sure your interpretation of the correlation is accurate, and that you're moving the right levers to fix your problem.
Interconnections
Beyond correlations, there are also the general interconnections within your problem domain to consider. Changing something in one place could have far-reaching and unforeseen effects on seemingly unrelated areas of the problem domain due to poorly understood connections that run through the domain. Making the effort to understand the interconnections within your domain can give you much better insight into the issues at play, even if you can't be confident of how things will play out. You can at least have an idea of what to expect and be more prepared if things go wrong.
Take the recent drop in oil prices, for example. At first glance, falling oil prices seem like a great thing for most people, but predicting how changing oil prices will affect the economy is more complicated than you might expect because of the web of connections that oil prices have within our economy. Lower oil prices may make oil production uneconomical for companies in Texas, Louisiana, and North Dakota, increasing unemployment in those areas. If this is a strategy by producers in the Middle East to put US oil companies out of business, it could increase our dependence on foreign oil in the long run. Lower prices can also increase our consumption of a resource that's becoming more limited while reducing the incentive to develop alternatives like solar energy and EVs, not to mention the increased pollution that would result. When the wells run dry faster than anticipated, we may be in for a shock as gasoline prices skyrocket and we're caught unprepared.
I'm not predicting that this scenario will actually come to pass, but the interconnections are there to make it possible. Being aware of that, and at least analyzing and preparing for such a possibility would be a prudent thing to do. The same reasoning goes for problems in any domain. You should at least be aware of the interconnections, even if the future is uncertain, so that you can make contingency plans and be prepared to handle undesirable situations if they come up.
Irreducible Complexity
Sometimes you want to simplify a situation, to make your problem easier to deal with, but you can't. Some problems are too complex, have too many interconnections, and have intricate dependencies that cannot be decoupled. These complex problems require sufficiently complex solutions. If you try to simplify the solution too much, it will no longer solve the problem, and in some cases can actually make the problem worse.
The US health care system is one such beast. If anything has irreducible complexity, it's health care. Between patients, the insured, the uninsured, insurance providers, care providers, medical device manufacturers, pharmaceutical companies, and the government, there are thousands of businesses, millions of people, and trillions of dollars involved. A system of this scale is bound to waste some money, but it is pretty well documented that the US wastes much more than it should on health care. Any system that attempts to reduce the waste in US health care is going to be complicated. Even the first step of figuring out where all of the waste is in the system is complicated. It's no wonder the ACA is so complicated. The real question is, does the complexity of the ACA address the irreducible complexity of the US health care system?
When analyzing a problem and trying to simplify it in order to develop a tractable solution, it's important to be aware of when you've hit the point of irreducible complexity. Every problem has a complexity threshold, and if you oversimplify the problem, the solution is not going to fit. Part of good analysis is identifying where the point of irreducible complexity is, and solving for that level of complexity.
FUD
At some point during the analysis of a problem, especially as the problem gets more complicated, you may encounter FUD (fear, uncertainty, and doubt). Someone—or many people—starts thinking, what if this solution doesn't work? What if it has unintended consequences? What if it blows up in our faces? We shouldn't do it! It's too risky!! Look, because of this thing right here, the whole system is going to collapse, and the world will end!!! At this point you need to take a deep breath and calmly analyze if these claims have any merit. Maybe the solution is too risky for various reasons, but the correct decision cannot be reached through the emotions of FUD.
One recent example of FUD was the semi-successful mission by the European Space Agency to land the space probe Philae on a comet. While not exactly an economic example, plenty of money was spent on the mission, and it carried significant, avoidable risk because of FUD. The mission was successful in that Philae did indeed land on the comet, but due to a couple of malfunctions in its landing system, it bounced around and came to rest at the base of a cliff, leaving the solar panels it was equipped with largely useless as an energy source. If Philae had used plutonium-238, a non-weapons-grade radioisotope that has safely powered many deep-space probes, the fudged landing wouldn't have mattered, and Philae would have had enough power to take all of the measurements scientists intended as the comet approached the sun.
The problem is that plutonium-238 has the word "plutonium" in it, so enough people are afraid of it without understanding it or realizing how benign it is. If reasonable analysis had been used in this case to overcome the FUD, we might have gained much more insight into the properties of comets, greatly increasing our knowledge of another piece of the universe. Try not to let FUD get in the way of your analysis of potential solutions to your problems.
Counterexamples
One way to counteract FUD is to find counterexamples. Are there actual examples that dispute the opposition's claims? If your opponents are claiming that a solution can't work because of some general principle, finding a counterexample can cause their argument to collapse and force everyone to look at the context and trade-offs at hand instead of speaking in generalities.
As an example, let's turn back to taxes. Americans generally believe raising taxes on the middle class and the poor reduces work incentives, resulting in higher unemployment. If you don't get to keep as much of your hard-earned money, then what's the point in working so much, right? (Bonus: can you see the logical flaw in this argument?) We can actually test this theory because there are plenty of countries with much higher taxes than the US. Do we actually see reduced employment in countries with higher taxes? Scandinavian countries are a glaring counterexample to this theory, so there must be more in play than the general claim that high tax rates equal lower employment.
Counterexamples are good for deciding whether or not general rules apply in particular situations. If counterexamples exist that disprove the general rule, then you should look more closely at the context of the problem at hand to decide what the best solution will be.
Counterfactuals
Counterfactuals are similar to counterexamples, except that you're not looking for a real example to disprove a general rule; you're doing a mental comparison of what happens to a system in the presence or absence of something. It's very easy to take a one-sided view that something does (or doesn't) work without looking at the opposite situation. Doing the comparison leads to much more sound analysis and better insights.
A good example of the benefit of counterfactuals comes from the Obama stimulus package in early 2009. The economy was still cratering when the ARRA went into effect, and unemployment peaked at 10%, well above the estimates coming out of the Obama administration when they were promoting the predicted benefits of the stimulus. It's very easy to conclude that the stimulus didn't work, but that ignores what would have happened without the stimulus. Yes, 10% unemployment was way above the administration's estimates, but those estimates were wrong about how deep the downturn already was, not necessarily about what the stimulus did. That doesn't mean the stimulus failed. It means that without the stimulus, unemployment would likely have been even higher. In fact, nearly all economists responding to a recent survey agreed that the stimulus did reduce unemployment.
Counterfactuals are very important when analyzing a problem. Adding and removing different parts of a system and comparing the expected outcomes is a great way to gain a better understanding of trade-offs and interconnections. Such an exercise may show you that even though a particular solution didn't fix the problem, it helped, and it may be worth looking at extending the solution instead of discounting it.
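In code, the cheapest version of this exercise is to run the same workload with and without the piece you're questioning and compare the outcomes side by side. Here's a minimal sketch along those lines (a toy example where the "part" being added and removed is a memoization cache):

```python
# Counterfactual comparison in miniature: the same workload run with and
# without one component (here, a memoization cache), outcomes compared directly.
import time
from functools import lru_cache

def fib_plain(n):
    return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

for name, fn in [("without cache", fib_plain), ("with cache", fib_cached)]:
    start = time.perf_counter()
    fn(30)
    print(f"{name}: {time.perf_counter() - start:.4f}s")
```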
Alternative Solutions
An analysis is never complete if you only look at one possible solution. Having a collection of alternative solutions to compare and choose from is critical to good analysis. Coming up with alternatives really brings together all of the other points listed above. The best solutions will fit the context of the problem and optimize the trade-offs for your particular goals, taking the magnitudes of various effects into account. The correlations, interconnections, and complexity of the problem domain will bring out different solutions and highlight their strengths and weaknesses. Alternative solutions will include counterexamples and counterfactuals to help combat FUD. In the end, if you've done your analysis well, you'll have a good selection of alternatives to choose from, and you'll be more confident in deciding on the best solution.