Rent Price Stickiness and the Latest CPI Data.

Fear of increasing inflation in the U.S. appears to be the trigger behind the market volatility of recent weeks. Recent gains in workers’ hourly compensation have had analysts measuring the effect of wages on inflation. In turn, analysts began pondering changes in the Fed’s monetary policy given the economy’s apparent overheating path, which is believed to be driven mostly by a low unemployment rate and tight labor markets. Within the broad measure of inflation, the piece that will help complete the puzzle comes from housing market data. Although the “Shelter” item in the Consumer Price Index posted one of the largest increases for January 2018, by technical definition its estimation weights down the effect of housing prices on the CPI. Despite the strong argument behind the BLS’s imputation of Owners’ Equivalent Rent, I consider it relevant to take a closer look at the Shelter component of the CPI from a different perspective. That is, despite the apparently far-fetched correlation between housing prices and market rents, it is worth visualizing how such a correlation might hypothetically work and affect inflation. The first step is identifying the likely magnitude of the effect of house prices on the estimation and calculation of rent prices.

Given what we know so far about rent price stickiness, Shelter cost estimation, and interest rates, the challenge in completing the puzzle consists of understanding the link between housing prices (which are considered capital goods rather than consumables) and inflation. Such a link can be traced by looking at the relation between home prices and the price-to-rent ratio. In bridging the conceptual difference between capital goods (not measured in the CPI) and consumables (measured in the CPI), the Bureau of Labor Statistics forged a proxy for the amount a homeowner would have to pay if the house were rented instead: Owners’ Equivalent Rent. This proxy hides the market value of the house by simply equating it to nearby rent prices without controlling for house quality. Perhaps real estate professionals can shed light on this matter.

How Brokers Set Rent Prices.

It is often said that rental prices do not move in the same direction as housing prices. Indeed, in an interview with real estate professional Hamilton Rodrigues from tr7re.com, he claimed that there is no such relationship. Nonetheless, when asked how he sets prices for newly rented properties, his answer hints at a link between housing prices and rent prices. Mr. Rodrigues’s estimates for rent prices equal either the average or the median of at least five “comparable” properties within a one-mile radius. The key word in Mr. Rodrigues’s statement is comparable. As a broker, he knows that rent prices go up if the value of the house goes up because of improvements and remodeling. Those home improvements represent a departure from the observed stickiness of rent prices.

For the same reason, when a house gets an overhaul, one may expect a bump in its rent price. That bump must show up in the CPI and in inflation. I took Zillow’s data for December 2017 for the fifty U.S. states and ran a simple linear OLS model. By modeling the log of the Price-to-Rent Ratio Index as an outcome of housing prices, I believe it is feasible to infer a spillover of increasing house prices onto current inflation expectations. The two independent variables are the log of the bottom-tier House Price Index and the log of the top-tier House Price Index. I assume here that when a house gets an overhaul, it switches from the bottom-tier data set to the top-tier data set.
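A minimal sketch of that regression is below, assuming Zillow’s December 2017 state-level indexes have been exported to a CSV; the file name and column names are hypothetical placeholders, not Zillow’s actual field names.

```python
# Sketch of the state-level OLS described above (hypothetical file and columns).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("zillow_states_dec2017.csv")            # hypothetical file name
df["log_ptr"] = np.log(df["PriceToRent"])                # dependent variable
df["log_bottom"] = np.log(df["BottomTier"])              # bottom-tier home value index
df["log_top"] = np.log(df["TopTier"])                    # top-tier home value index

X = sm.add_constant(df[["log_bottom", "log_top"]])
model = sm.OLS(df["log_ptr"], X).fit()
print(model.summary())

# Standardized (beta) coefficients, to compare the two tiers on a common scale
std = df[["log_ptr", "log_bottom", "log_top"]].apply(lambda s: (s - s.mean()) / s.std())
print(sm.OLS(std["log_ptr"], sm.add_constant(std[["log_bottom", "log_top"]])).fit().params)
```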

Results and Conclusion.

The results table below shows that the beta coefficients are consistent with what one might expect: the top-tier index has a more substantial impact on the variation of the price-to-rent variable (estimated β₂ = .12 and standardized β = .24, versus β = .06 for the bottom tier). Hence, I would infer that overhauls might signal the link through which houses, as capital goods, could affect consumption indexes (CPI and CEI). Once the effect of house prices on inflation is figured out, the picture of rising inflation today becomes clearer and more precise. By this means, predictions about Fed tightening and accommodative policies will become clearer as well.

Recent Narratives of Stock and Bond Bubbles.

On February 5th, 2018, the Dow Jones index fell 1,175 points by the end of the trading day. Four economic scenarios are being analyzed in the news as of the first week of February 2018. First, there are indeed both stock market and bond market bubbles. Second, Monday’s Dow selloff was just an anticipated correction on the investors’ side. Third, the stock market is returning to the old normal of higher volatility. Fourth, a Trump economic effect. I need not cover the broader concerns about the U.S. economy in this blog post. Hence, the analysis I think is needed right now is ruling out a contagion effect from the narratives created around Monday’s Dow selloff. Indeed, I believe that such a narrative, if any, can be traced back to former chairman Alan Greenspan’s comments on January 31st, when he stated that America has both a bond market bubble and a stock market bubble. By discarding the contagion effect in current narratives, I side with analysts who have asserted that the Dow’s fall was just an anticipated market correction.

Can economists claim there is some association between Alan Greenspan’s comments and the Monday fall of the Dow Jones? I may not have an answer to that question yet, but we can look into the dynamics of the phenomenon to better understand how narratives could either deter or foster an economic crisis in early 2018. If there is room for arguing that Mr. Greenspan’s comments triggered Monday’s Dow selloff, I believe we should be observing some sort of panic or manifestation of economic anxiety. By looking at data from Google Trends, I look for breakouts that may well be understood as “spreading” symptoms. In other words, if Mr. Greenspan’s comments had any effect on Monday’s Dow selloff, we should expect to see an increase in Google searches for two terms: first “Alan Greenspan,” and second “Stock Market Bubble.” The chart below shows Google Trends indexes for both terms. Little to nothing can be said about the graph after a visual inspection of the data. It is hard to believe that narratives of economic crisis are spreading fast, or that Mr. Greenspan’s comments had any effect on the Dow’s selloff.
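For readers who want to reproduce the comparison, the sketch below pulls the two terms with pytrends, an unofficial third-party Google Trends client; the library, its rate limits, and the exact timeframe are assumptions on my part rather than part of the original analysis.

```python
# Sketch: fetch and plot Google Trends interest for the two terms around the events.
import matplotlib.pyplot as plt
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
terms = ["Alan Greenspan", "Stock Market Bubble"]
pytrends.build_payload(terms, timeframe="2018-01-01 2018-02-10", geo="US")

trends = pytrends.interest_over_time()                   # search interest indexed 0-100
trends[terms].plot(title="Google Trends: Greenspan comments vs. bubble searches")
plt.show()
```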

How did things occur?

Economists are lagging in the study of narratives, hence the limited set of appropriate analytical tools. Robert Shiller wrote early in 2017 that “we cannot easily prove that any association between changing narratives and economic outcomes is not all reverse causality, from outcomes to the narratives,” which is certainly accurate once time has passed and the empirical evidence becomes obscure. However, on February 1st, 2018, mainstream media reported extensively on a couple of statements made by Alan Greenspan about bubbles. In the following days, several market indexes closed with relatively big losses. In detail, the events occurred as follows:

  1. On January 31st, 2018, Alan Greenspan told Bloomberg News: “There are two bubbles: We have a stock market bubble, and we have a bond market bubble.”
  2. On February 5th, 2018, the Dow Jones index fell 1,175 points by the end of Monday’s trading day.

Whenever these events happen, we all rush to think about Robert Shiller. As Paul Krugman cited Shiller on February 6th, 2018: “when stocks crashed in 1987, the economist Robert Shiller carried out a real-time survey of investor motivations; it turned out that the crash was essentially a pure self-fulfilling panic. People weren’t selling because some news item caused them to revise their views about stock values; they sold because they saw that other people were selling.” In other words, Robert Shiller’s work on Narrative Economics is meant for these types of conjectures. Narratives of economic crisis play a critical role in dispersing fear whenever economic bubbles are about to burst. One way to gauge the extent to which such a contagion effect occurs is by looking at Google Trends search levels.

 

 

No signs of fast-spreading economic crisis narratives:

Despite the ample airtime coverage, there is little to no evidence of a market crash or an economic crisis. Amid a wave of fast-paced breaking news announcing crises and linking them to political personalities, markets seem simply to be having an expected correction after an extended period of gains. The best way to reach that conclusion is to look at the firm numbers reported lately on the job market and to examine the collective reaction of fear and expectations. Again, four economic scenarios are being analyzed as of the first week of February. First, there are stock market and bond market bubbles. Second, Monday’s Dow selloff was just an anticipated correction on the investors’ side. Third, the market is returning to the old normal of higher volatility. Fourth, a Trump effect. Only the second scenario seems plausible to me. First, the selloff appears not to have dug into investors’ and people’s minds, thereby avoiding the contagion effect. Second, despite the unreliability of winter economic statistics, the January 2018 jobs report seems optimistic (I think those numbers will be revised downward). Third, claiming volatility is back in the stock market is like claiming Trump is back in controversy. Therefore, the only option left to explain Monday’s selloff is a market correction.

The overuse of the word “Strong” in economic news.

The U.S. economy added 228,000 new jobs in November 2017, and analysts rushed to assess the state of the economy as “STRONG.” Although job reports are indeed good indicators of the performance of the U.S. economy, one should not reduce a job report to a snapshot of the economy that allows for such “strong” conclusions by and in itself. In this post, I show that, despite the existence of a cointegration vector between the unemployment rate and the count of the word “Strong” in the Beige Book, journalists indeed overuse the word “Strong” in headlines. Although interpreting cointegration as elasticity goes beyond the scope of this post, I think that by looking at the cointegration relation it is safe to conclude that the current word count does not reflect the “strong” picture shown by the media, but rather more moderate economic conditions.

To start, let me go back to the first week of December 2017. Back then, news outlets ran headlines overusing the word “strong.” Some examples came from major U.S. newspapers such as the New York Times, Reuters, CNN, and the Washington Post. The following excerpts are just a sample of the narrative seen in those days:

“The American economy continues its strong performance” (CNN Money).

“The economy’s vital signs are stronger than they have been in years” (NY Times).

“Strong US job growth in November bolsters economy’s outlook” (Reuters).

“These are really strong numbers, which is pretty exciting…” (Washington Post).

Getting to know what is happening in the economy challenges economists’ wisdom. Researchers are constrained by the epistemological limits of data and reality, and so are journalists. To understand economic conditions, researchers utilize both quantitative and qualitative data, while journalists focus on qualitative data most of the time. Regarding qualitative data, the Beige Book collects anecdotes and qualitative assessments from the twelve regional banks of the Federal Reserve System, which may help news outlets gauge their statements and headlines. The Fed surveys business leaders, bank employees, and other economic agents to gather information about the current conditions of the U.S. economy. As a researcher, I counted the number of times the word “strong” shows up in the Beige Book starting back in 2006. The results are plotted below.
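Reproducing that count is straightforward. Below is a minimal sketch, assuming each Beige Book release has been saved locally as a plain-text file; the file-naming convention is hypothetical.

```python
# Sketch: count occurrences of "strong" in locally saved Beige Book releases.
import glob
import re

counts = {}
for path in sorted(glob.glob("beige_book_*.txt")):       # e.g. beige_book_2006_01.txt
    with open(path, encoding="utf-8") as f:
        text = f.read().lower()
    # \b keeps "strong" from matching inside "stronger"/"strongly";
    # drop the word boundaries if those variants should be counted too.
    counts[path] = len(re.findall(r"\bstrong\b", text))

for release, n in counts.items():
    print(release, n)
```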

If I were trying to identify a correlation between the count of the word “strong” and the unemployment rate, it would be very hard to do so by plotting the two lines simultaneously. Most of the time, when simple correlations are plotted, the scatter barely reveals any relation between the two variables. In this case, however, cointegration goes a little deeper into the explanation. The graph below shows how the logs of both variables behave over time. Both decrease during the Great Recession, and both increase right after the crisis started to end. More recently, however, the two variables began to diverge from each other, which makes the relation difficult to interpret, at least in the short run.

Qualitative data hold some clues in this case. Indeed, the plot shows a decreasing trend in the use of the word within the Beige Book. In other terms, as journalists increase its use in headlines and news articles, economists at the Federal Reserve decrease their use of the word “strong.” If I were going to state causality from one variable to the other, I would first link the word “strong” to some optimism about expected economic outcomes. Thereby, one should expect a decrease in the unemployment rate as the use of the word “strong” increases. This is a classic Keynesian perspective on the unemployment rate. Such a causal relation might constitute the cointegration equation that the cointegration test identifies in the output tables below. In other words, the more you read “strong,” the more employers hire. By running a cointegration test, I can show that both variables are cointegrated over time. That is, there is a long-term relationship between the two variables (both are I(1)). The cointegration test shows that there exists at least one linear combination of the two variables that is stationary over time.
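A sketch of that two-step check is below, using statsmodels’ Engle-Granger routine on synthetic placeholder series; the output tables in this post may come from a different routine (e.g., Johansen’s), and the real inputs would be the aligned log word count and log unemployment rate.

```python
# Sketch: (1) confirm both series are I(1), (2) run an Engle-Granger cointegration test.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(0)
common = np.cumsum(rng.normal(size=150))                  # shared stochastic trend
log_unemp = pd.Series(common + rng.normal(scale=0.3, size=150))
log_strong = pd.Series(0.8 * common + rng.normal(scale=0.3, size=150))

# Step 1: I(1) means levels are non-stationary while first differences are stationary
for name, s in [("log_unemp", log_unemp), ("log_strong", log_strong)]:
    print(name, "levels p =", round(adfuller(s)[1], 3),
          "| diffs p =", round(adfuller(s.diff().dropna())[1], 3))

# Step 2: Engle-Granger test on the residuals of the long-run relation
t_stat, p_value, _ = coint(log_unemp, log_strong)
print("Engle-Granger p-value:", round(p_value, 3))        # small p suggests cointegration
```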

The difficulty with the overuse of the word nowadays is that economists at the Federal Reserve are not using it at the same pace journalists do. In fact, the word count has decreased drastically over the last two years from its 2015 peak. Such a mismatch may create false expectations about economic growth, sales, and economic performance that could lead to an economic crisis.

Raising economic expectations with the “after-tax” reckoning: President Trump’s corporate tax cut plan.

The series of documents published by the White House Council of Economic Advisers indicates that President Donald Trump’s tax reform will end up being his economic growth policy. The most persuasive pitch behind the corporate tax cut is that lowering taxes on corporations will foster investment and thereby economic growth. Further, the political rhetoric points to estimates of tax-cut-boosted GDP growth of 3 to 5 percent in the long run. In support of the corporate tax cut, the White House Council of Economic Advisers presented both a theoretical framework and some empirical evidence on the effects of tax cuts on economic growth. Even though the evidence the CEA presents is factually accurate, any analyst reading the document would promptly notice that the story is incomplete and biased. In this blog post, I will briefly point to the incompleteness of the White House CEA’s justification for the tax cut policy. Then, I will show that the allegedly “substantial” empirical evidence meant to support the corporate tax-cut policy is insufficient as well as flawed. Third, I will make some remarks on the relevance of the tax cut as a fiscal policy tool given the current limitations of monetary policy. Finally, I conclude that although the corporate tax cut has short-term benefits, those benefits are temporary as the new rate becomes the new normal, and, given that tax policy cannot be optimized, managing expectations from the administration is a waste of policy effort.

The very first policy instance that the CEA stresses in its document is the claim that a corporate tax cut does affect economic growth. Following the CEA’s rationale about current economic conditions, the main obstacle to GDP growth rates above 2 percent is the low rate of private fixed investment. The CEA implicitly infers that the user cost of capital far exceeds profit rates. In other words, profit rates do not add up enough to cover depreciation and the wear of capital investments. Thus, if private investment depends on expected profit as well as depreciation, simply put I_t = I(π_t/(r_t + δ)), where the numerator is profit and the denominator is the user cost of capital (the real interest rate plus depreciation), the quickest strategy to alter the equation is to increase profit by lowering fixed costs such as taxes. The CEA’s rationale assumes, correctly, that no one can control the depreciation of capital goods, and thinks, wrongly, that no one (including the Federal Reserve, which currently faces serious limitations) can control the real interest rate.
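To make the mechanics concrete, here is a toy calculation of that ratio. The profit level, the rates, and the move from a 35 to a 21 percent tax rate are illustrative numbers, not estimates taken from the CEA document.

```python
# Toy illustration: investment responds to after-tax profit over the user cost
# of capital, pi_t / (r_t + delta). All numbers below are illustrative.
def investment_signal(pre_tax_profit, tax_rate, real_rate, depreciation):
    """Return after-tax profit divided by the user cost of capital."""
    after_tax_profit = pre_tax_profit * (1 - tax_rate)
    user_cost = real_rate + depreciation
    return after_tax_profit / user_cost

baseline = investment_signal(pre_tax_profit=100, tax_rate=0.35,
                             real_rate=0.02, depreciation=0.08)
after_cut = investment_signal(pre_tax_profit=100, tax_rate=0.21,
                              real_rate=0.02, depreciation=0.08)

print(round(baseline, 1), round(after_cut, 1))            # the cut mechanically lifts the ratio
```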

The CEA fetched some data from the Bureau of Economic Analysis to demonstrate that private sector investment is showing worrying signs of exhaustion. The Council sees a “substantial” weakness in equipment and structures investment. More precisely, the CEA remarks that both equipment and structures investment have declined since 2014. Indeed, the two variables show declines of 2 and 4 percent, respectively. However, although the CEA considers such declines worrisome, they do not seem extraordinary enough to raise genuine policy concerns: in fairness, those variables have shown sharper decreases in the past. The adjective “substantial,” which justifies the corporate tax cut proposal, is fundamentally flawed.

The problem with the proposal is that “substantial” does not imply “significant” in the statistical sense. In fact, when put in econometric perspective, one of those two declines does not appear to be statistically different from the mean. In other words, the two declines look just like natural variation within the normal business cycle. A simple one-sample t-test shows the incorrectness of the “substantial” reading of the data. A negative .023 change (p = .062) in private fixed investment in equipment (nonresidential) from 2015 to 2016 is just on the verge of normal business (M = .027, SD = .097) when the alpha level is set to .05. On the other hand, a negative .043 change (p = .013) in private fixed investment in nonresidential structures stands out from the average change (M = .043, SD = .12), but it is still too early to claim there is a substantial deceleration of investment.
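The test behind that reading can be sketched as follows. The series below is a placeholder standing in for the BEA annual percentage changes, so the printed p-value is illustrative only; the logic, however, mirrors the one-sample t-test described above.

```python
# Sketch: is the latest decline statistically distinguishable from the historical changes?
import numpy as np
from scipy import stats

historical_changes = np.array([0.08, 0.05, -0.11, 0.12, 0.09, 0.06,
                               0.04, 0.03, 0.05, 0.02])   # placeholder values

latest_decline = -0.023                                    # 2015-2016 equipment change cited above
t_stat, p_value = stats.ttest_1samp(historical_changes, popmean=latest_decline)
print(round(t_stat, 2), round(p_value, 3))                 # p above 0.05 -> within normal variation
```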

Thus, if the empirical data on investment do not support a change in tax policy, the CEA instead tries to maneuver growth through policy expectations. Its statements and publications unveil the desire to influence agents’ economic behavior by reckoning with the “after-tax” condition of expected profit calculations. Naturally, the economic benefits of corporate tax cuts will run only in the short term, as the new rate becomes the new normal. Therefore, nominally increasing profits will just boost profit expectations in the short term while increasing the deficit in the long run. Ultimately, the problem with using tax reform as growth policy is that tax rates cannot be controlled for optimization. Unlike the interest rate, and for numerous reasons, governments do not utilize tax policy as a tool for influencing either markets or economic agents.

 

Implications of “Regression Fishing” over Cogent Modeling.

One of the most recurrent questions in quantitative research refers to how to assess and rank the relevance of the variables included in multiple regression models. This type of uncertainty arises most of the time when researchers prioritize data mining over well-thought-out theories. Recently, a contact of mine on social media formulated the following question: “Does anyone know of a way of demonstrating the importance of a variable in a regression model, apart from using the standard regression coefficient? Has anyone had any experience in using Johnson’s epsilon or alternatives to solve this issue? Any help would be greatly appreciated, thank you in advance for your help.” In this post, I would like to share the answer I offered him, stressing that what he wanted was to justify the inclusion of a given variable beyond its weighted effect on the dependent variable. In the context of science and research, I pointed to the need for appropriate modeling over mere “p-hacking” or the questionable practice of “regression fishing.”


What I think his concern was really about is modeling in general. If I am right, then the way researchers should tackle such a challenge is by establishing the relative relevance of a regressor beyond the coefficients’ absolute values, which requires a combination of intuition and just a bit of data mining. Thus, I advised my LinkedIn contact to gauge the appropriateness of the variables by comparing them against themselves and analyzing them on their own. The easiest way to proceed was to scrutinize the variables independently and then jointly. Therefore, assessing the soundness of each variable is the first procedure I suggested he go through.

In other words, for each of the variables I recommended checking the following:

First, data availability and the degree of measurement error;

Second, make sure every variable is consistent with your thinking –your theory;

Third, check the core assumptions (a minimal check is sketched after this list);

Fourth, try to include rival models that explain the same thing your model is explaining.
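On the third item, here is a minimal sketch of what an assumption screen could look like, using made-up data and two standard statsmodels diagnostics: variance inflation factors for multicollinearity and the Jarque-Bera statistic for residual normality. The variable names are placeholders.

```python
# Sketch: screen an OLS model for multicollinearity and non-normal residuals.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import jarque_bera

rng = np.random.default_rng(1)
df = pd.DataFrame({"x1": rng.normal(size=200), "x2": rng.normal(size=200)})
df["y"] = 1.0 + 0.5 * df["x1"] - 0.3 * df["x2"] + rng.normal(size=200)

X = sm.add_constant(df[["x1", "x2"]])
fit = sm.OLS(df["y"], X).fit()

# Variance inflation factors: values well above 10 flag multicollinearity
for i, col in enumerate(X.columns):
    print(col, round(variance_inflation_factor(X.values, i), 2))

jb_stat, jb_p, _, _ = jarque_bera(fit.resid)               # residual normality check
print("Jarque-Bera p-value:", round(jb_p, 3))
```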

Now, for whatever reason, all variables seemed appropriate to the researcher. He did check the standards for including variables, and everything looked good. In addition, he believed that his model was sound and cogent with respect to the theory he had surveyed at the moment. So, I suggested raising the bar for refining the model by analyzing the variables in the context of the model. This is where the second step begins, with a so-called post-mortem analysis.

The post-mortem analysis meant that after running as many regressions as he could, we would scrutinize the variables for specification errors, measurement errors, or both. Given that specification errors were present in the model, I suggested a test of nested hypotheses, which amounts to asking whether the model omitted relevant variables (misspecification error) or added an irrelevant variable (overfitting error). In this case, the modeling error was the latter.
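In practice, the nested comparison can be run directly on two fitted OLS models. The sketch below uses made-up data and variable names; compare_f_test asks whether the suspect regressor adds explanatory power beyond the restricted model.

```python
# Sketch: nested F-test comparing a full model against a restricted one.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
df = pd.DataFrame({"x1": rng.normal(size=300),
                   "x_extra": rng.normal(size=300)})       # candidate irrelevant variable
df["y"] = 2.0 + 1.5 * df["x1"] + rng.normal(size=300)      # x_extra plays no role

full = sm.OLS(df["y"], sm.add_constant(df[["x1", "x_extra"]])).fit()
restricted = sm.OLS(df["y"], sm.add_constant(df[["x1"]])).fit()

f_value, p_value, df_diff = full.compare_f_test(restricted)
print(round(f_value, 2), round(p_value, 3), df_diff)       # large p -> drop x_extra
```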

The bottom line, in this case, was that regardless of the test my contact decided to run, the critical issue would always be to track and analyze the nuances in the error terms of the competing models.

I recognized Heteroscedasticity by running this flawed regression.

In a previous post, I covered how heteroscedasticity “happened” to me. The anecdote I mentioned mostly pertains to time series data. Given the purpose of the research I was developing back then, change over time played a key role in the variables I analyzed. The fact that the rate of change manifested over time made my post limited to heteroscedasticity in time series analysis. However, we all know heteroscedasticity is also present in cross-sectional data, so I decided to write something about it: not only because I did not cover cross-sectional data, but also because I believe I finally understood what heteroscedasticity was about when I identified it in cross-sectional data. In this post, I will try to depict, literally, heteroscedasticity so that we can share some opinions about it here.

As I mentioned before, my research project at the time was not very sophisticated. I had said that I aimed at identifying the effects of the Great Recession on the Massachusetts economy. So, one of the obvious comparisons was to match U.S. states on employment levels. I use employment levels as an example because employment by itself creates many econometric troubles, heteroscedasticity being one of them.

The place to start looking for data was the U.S. Bureau of Labor Statistics, which is a nice place to find high-quality economic and employment data. I downloaded employment-level statistics for all fifty states. Here in this post, I am going to restrict the number of states to the first seventeen in alphabetical order in the data set below. At first glance, the reader should notice that the variance in the alphabetical array looks close to random. Perhaps, if the researcher has no other information about the states listed in the data set -as is often the case- she may wonder whether there is any association at all between the order of the states and their level of employment.

Heteroscedasticity 1

I could take any other variable (check these data sources on the U.S. housing market), set it alongside employment level, and regress on it to explain the effect of the Great Recession on employment levels, or vice versa. I could also estimate coefficients for the number of patents per employment level by state, or whatever I could imagine. However, my estimates would always be unreliable because of heteroscedasticity. Well, I am going to pick a variable at random. Today, I happen to think that there is a strong correlation between households’ pounds of meat eaten per month and the level of employment. Please do not get me wrong; I believe that just for today. I have to caution the reader: I may change my mind after I am done with the example. So, please allow me to assume such a relation does exist.

Thus, if you look at the table below, you will find it interesting that employment levels are strongly correlated with households’ pounds of meat eaten per month.

Heteroscedasticity 2

Okay, when we array the data set in alphabetical order, the correlation between employment level and households’ pounds of meat eaten per month is not as clear as I would like it to be. So let me re-array the data set below by employment level, from the lowest to the highest value. When I sort the data by employment level, the correlation becomes self-evident. The reader can now see that employment drives households’ pounds of meat eaten per month up. Thus, the higher the employment level, the greater the number of pounds of meat consumed per household per month. For those of us who appreciate protein -with all due respect to vegans and vegetarians- it makes sense that when people have access to employment, they also have access to better food and protein, right?

Heteroscedasticity 3

In this case, given that I have a small data set, I can re-array the columns and visually identify the correlation. If you look at the table above, you will see how both variables grow together. It is possible to see the trend clearly, even without a graph.

But let us now be a bit more rigorous. When I regressed employment levels on households’ pounds of meat eaten per month, I got the following results:

Heteroscedasticity 4

After running the regression (ordinary least squares), I found that there is indeed a small effect of meat consumption on employment; nonetheless, it is statistically significant. Indeed, the regression R-squared is so high (.99) that it becomes suspicious. And, to be honest, there are in fact reasons for the R-squared to be suspicious. All I have done is trick the reader with fake data on meat consumption. The real data behind meat consumption used in the regression is the corresponding state population. The actual effect on the variance of employment levels stems from the fact that states vary in population size. In other words, it is clear that the scale of the states affects the variance of the level of employment. So, if I do not remove the size effect from the data, heteroscedasticity will taint every single regression I could make when comparing different states, cities, households, firms, companies, schools, universities, towns, regions, and so on and so forth. What this example means is that if the researcher does not test for heteroscedasticity, as well as the other core assumptions, the resulting inferences will always be unreliable.

Heteroscedasticity 5
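The same story can be told more formally. The sketch below simulates state-like units whose error variance grows with population, mimicking the scale effect described above, and flags it with a Breusch-Pagan test; the data are synthetic, not the BLS series used in this post.

```python
# Sketch: scale-driven heteroscedasticity across simulated "states" and a Breusch-Pagan test.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(3)
population = rng.uniform(0.5, 40.0, size=50)               # millions, illustrative
employment = 0.45 * population + rng.normal(scale=0.02 * population)  # noise grows with size
meat = 0.3 * population + rng.normal(scale=0.02 * population)         # "meat" really tracks population

X = sm.add_constant(meat)
fit = sm.OLS(employment, X).fit()

lm_stat, lm_p, f_stat, f_p = het_breuschpagan(fit.resid, fit.model.exog)
print("Breusch-Pagan p-value:", round(lm_p, 4))            # small p -> heteroscedasticity
```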

For some smart people, this is self-explanatory. For others like me, it takes a bit of time before we can grasp the real concept of the variance of the error term. Heteroscedasticity-related mistakes occur most of the time because social scientists look directly at the relation among variables. Regardless of the research topic, we tend to forget to factor in how population size affects the subject of our analysis. So, we tend to believe that it is enough to find the coefficient of the relation between, for instance, milk intake in children and household income without considering the size effect. A social scientist surveying such a relation would regress the number of liters of milk drunk by the household on family income.
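A minimal, self-contained sketch of the remedy implied here: expressing both variables in per-capita terms removes the size effect, and the same Breusch-Pagan test no longer rejects homoscedasticity. Again, the data are synthetic and purely illustrative.

```python
# Sketch: totals vs. per-capita variables under a Breusch-Pagan test.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(4)
population = rng.uniform(0.5, 40.0, size=50)                        # illustrative sizes
income = 30.0 * population + rng.normal(scale=0.5 * population)     # totals scale with size
milk = 2.0 * population + rng.normal(scale=0.05 * population)

for label, (y, x) in {"levels": (milk, income),
                      "per capita": (milk / population, income / population)}.items():
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    lm_p = het_breuschpagan(fit.resid, fit.model.exog)[1]
    print(label, "Breusch-Pagan p =", round(lm_p, 4))
```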

The Current Need for Mixed Methods in Economics.

Economists and policy analysts continue to wonder what is going on in the U.S. economy. Most of the uncertainty stems from both the anemic pace of economic growth and fears of a new recession. Regarding economic growth, analysts point to sluggish changes in productivity, while fears of a new recession derive from global markets (e.g., Brexit). Unlike fears of a global economic downturn, the former issue drives many hypotheses and passions, given that action relies on fiscal and monetary policy rather than just market events. Hence, both productivity and capacity utilization concentrate most of the attention these days in newspapers and op-eds. Much public debate is needed before the community of economists can pinpoint the areas of the economy that require an urgent overhaul; indeed, I would argue that analysts need to get out there and see, through unconventional lenses, how tech firms struggle to realize profits. Mixed methods in research would offer insights into what is keeping economic growth lackluster.

Why do economists sound these days more like political scientists?

Paradoxically enough, politics is playing a key role in unveiling circumstances that economists would otherwise ignore, and it is doing so by touching the fiber of the layman’s economic situation. The current political cycle in the U.S. could hold answers to many of the questions economists have not been able to address lately. What does that mean for analysts and economists? Well, the fact that leading economists sound more like political scientists than actual economists these days means that the discipline must make use of interdisciplinary methods to flesh out current economic transformations.

Current economic changes, in both the structure of business and the structure of the economy, demand a combination of research approaches. In the first instance, it is clear that economists have come to realize that traditional data for economic analysis and forecasting have limitations when it comes to measuring the new economy. That is only natural, as most economic measures were designed for the older economic circumstances surrounding the second industrial revolution. Although traditional metrics are still relevant for economic analysis, current progress in technology seems not to be captured by such survey instruments. That is why analysts focusing on economic matters these days should get out and see for themselves what data cannot capture for them. In spite of the bad press in this regard, no one could argue convincingly that Silicon Valley is not adding to productivity in the nation’s businesses. Everyone everywhere witnesses how Silicon Valley and tech firms populate the startup scene. Intuitively, it is hard to believe that there are little to no gains from tech innovation nowadays.

Get out there and see how tech firms struggle to realize profits.

So, what is going on in the economy should not be blurred by what is going on with the tools economists use to research it. One could blame analysts’ inability to understand current changes. In fact, that is usually what happens first when structural changes underlie economic growth. Think of how Adam Smith and David Ricardo fleshed out something that nobody had seen before their time: profit. I would argue that something similar, with a twist, is happening now in America. Analysts need to get out there and see how tech firms struggle to realize profits. Simply put, and at the risk of generalizing, the vast majority of new entrepreneurs do not yet know what, or how much, to charge for new services offered through the internet. Capital ventures into innovative tech firms most of the time without knowing how to monetize services. The situation is exacerbated amid a flood of goods and services offered at no charge on the World Wide Web, which suggests that not knowing how to charge for services drives current stagnation. Look at the news industry for a vivid example.

Identifying this situation could shed light on economic growth data as well as current data on productivity. With so much innovation around us, it is hard to believe that productivity is neither improving nor contributing to economic growth in the U.S. Perhaps qualitative approaches to research could yield valuable insights for analysis in this regard. The discipline desperately needs answers for policy design, and different approaches to research may help us all understand the actual economic transformations.

U.S. economic slowdown? Look at Real Estate labor market.

One month of weak payroll data does not make a crisis. The U.S. economy appears to have added only 160,000 new jobs in April 2016, the Bureau of Labor Statistics reported on Friday. A similar number was published earlier in the week by the payroll firm ADP. Although the slowdown in hiring came from local and federal government (-11,000) as well as from mining (-8,000), the sector that should get more attention is Real Estate and Leasing Services. Indeed, this sector could be revealing what is happening in current economic conditions.

For the last economic quarter, analysts have seen employment growth that looks incongruent when compared to GDP growth. And now that the employment payroll looks weak, many analysts would like to rush and call an economic recession. However, it is too soon to assert anything akin to a crisis, mainly because the slowdown in hiring came from local and federal government (-11,000) as well as from mining (-8,000). Those two sectors were not expected to grow, given that oil prices are still low and the electoral cycle continues. Retail trade also failed to add jobs at the pace it had during the past three months, but the -3,000 jobs slowdown is not alarming since the industry’s previous growth was strong.

Instead, the sector that should get more attention is Real Estate and Leasing Services, as the spring season brings business to its stores. Establishing how busy real estate agents are around this time of the year could shed light on how the economy is actually running, for two reasons: not only does the weather season affect the sector’s business cycle, but its business also depends highly on the interest rate. In fact, the Real Estate labor market has seemed anemic lately. The sector appears to have added only about 600 new jobs over the month of April, which is certainly poor for what the season should have demanded.

The fact that housing sales depend on interest rates allows for inferences about how interest rate expectations influence the job market. In other words, the anemic employment growth in Real Estate appears to derive not from sluggish demand for housing but from interest rate expectations. Thus, persistent market speculation about rising interest rates could have affected consumer expectations on both housing and consumption. Therefore, it seems logical to think that, because of this, companies halted hiring in April, especially the Real Estate ones. Only time will unveil the outcome, though.

The focus right now is on the next meeting of the Federal Open Market Committee, in which monetary policymakers will decide again whether to increase rates or leave them unchanged.

Unemployment √. Inflation √. So… what is the Fed worrying about?

Although the Federal Open Market Committee’s (hereafter FOMC) March meeting on monetary policy focused on what was apparently a disagreement over the timing for modifying the federal funds rate, the minutes indicate that the disagreement is not only about timing but also about exchange rate challenges. Not only does the Fed struggle with when the best moment is to raise the rate, but it also grapples with how far its policy decisions can reach. The FOMC’s current economic outlook and its consensus on the state of the U.S. economy leave no room for doubts on domestic issues, but they do for uncertainties about foreign markets. Thus, the minutes of the meeting held in Washington on March 15th-16th, 2016, unveil an understated intent to influence global markets by stabilizing the U.S. currency. On the one hand, both objectives of monetary policy seem accomplished regarding labor markets and inflation. On the other, global deceleration is the one factor that concerns the Fed, since it could have adverse spillovers on America. The most recent monetary policy meeting reveals a subtle attempt to stabilize the U.S. dollar exchange rate at some level, thereby favoring American exports.

Unemployment rate √. Inflation rate √.

The institutional objective of the Federal Reserve seems uncompromised these days. Economic activity is picking up overall, the labor market is at desired levels, and inflation seems somewhat under control. The confidence economists have right now starts with the U.S. household sector. Household spending looks healthy, and officials at the Bank are confident such spending will keep buoying labor markets. As stated in the minutes, “strong expansion of household demand could result in rapid employment growth and overly tight resource utilization, particularly if productivity gains remained sluggish” (page 6). Indeed, the labor market is showing strong gains in employment, which has driven the unemployment rate down to 5.0 percent by the end of the first quarter of 2016.

Furthermore, the FOMC reads the high levels of consumer confidence as a guarantee of a sustained path for growth. The Committee also pointed out that low gasoline prices are stimulating not only higher levels of consumption but also motor vehicle sales. It also notes the favorable situation of the relatively high household wealth-to-income ratio. On the other hand, members of the Committee recognize that regions affected by oil prices are starting to struggle, while business fixed investment shows signs of weakening. Nevertheless, the consensus among members of the Committee reflects an overall optimism about the resilience of the economy rather than a worrisome reading of the outlook.

By Catherine De Las Salas


The fear comes from overseas.

The minutes, which were released on April 6th, 2016, show that Fed officials’ concerns stem from global economic and financial developments. The FOMC “saw foreign economic growth as likely to run at a somewhat slower pace than previously expected, a development that probably would further restrain growth in U.S. exports and tend to damp overall aggregate demand” (page 8). Officials also flagged warnings about wider credit spreads on riskier corporate bonds. In sum, policymakers at the FOMC interpret the current lackluster global situation as a threat to the economic growth of the United States.

Ruling out choices.

Therefore, the fact that those two conditions overlap has made the Committee anxious to intervene in an arena that could perhaps be out of its reach. By keeping the federal funds rate unchanged in March -and perhaps doing so until June- the FOMC does not aim at stimulating investment domestically. Nor does it aim at controlling inflation. In fact, the policy choice reveals a subtle attempt to keep the U.S. dollar exchange rate stable overseas, thereby favoring American exports. The latter statement can be inferred from the minutes based on the Committee’s consensus on the state of the economy. First, U.S. labor markets are strong, and the Fed considers that the actual unemployment rate corresponds to the estimated longer-run rate. Second, inflation -either headline or core- is projected and expected to be on target. And third, domestic conditions are, in general, satisfactory. The only factor that remains risky is the rest of the world. Therefore, whatever action the Committee took at the March meeting could be interpreted as intended to influence global markets.