Using the XLStat time-series analysis package: a Video Games Database example.

(Disclosure: The first part of the post is a note intended to promote XLStat as a software add-in.)

I have been asked to write a post about my experience with XLStat so far, and I must say that I thoroughly enjoy the advanced time-series analysis package. I side with people who advocate for the simplification of scientific understanding. One of my favorite internet-meme quotes reads, “If you can’t explain it simply, you do not understand it well enough.” When it comes to data analysis, I think we have two options: either we simplify analyses, or we make them intricate. That is precisely the choice XLStat gives you. It simplifies the modeling process in time-series analysis so that you can focus on the things that matter to you most. Time is a scarce resource, and we all need to make choices as we work and learn. In this post, I will fit a model that admits at least two methods of analysis and that can easily derail attention from the interpretation of the econometric findings into a fruitless methodological discussion about the choices the analyst could have made.

As the reader will notice, the series behaves like an inverted parabola for the most part. Such a descriptive feature may lead the analyst to consider a non-linear model at first glance, even though the underlying relationship between the variables could be simply linear. The second choice could be fitting a simple OLS autoregressive model. The first scenario requires programming skills, given that parabolas are not invertible functions. I believe that if the analyst chooses a non-linear model, the discussion about the data shifts away from the subject being analyzed to the methods being used. XLStat enables the analyst to proceed parsimoniously and to focus on the findings.

As a consultant in time series analysis, I want my clients to be able to draw valuable conclusions from the model, rather than muddle through the methodological details of the research process. That is why I choose simple over sophisticated, and why I choose XLStat over any other statistical package. I was taught to model parsimoniously, always selecting the most straightforward method for fitting a model. That parsimony ought to apply to software development and use as well. The following article will show you how simple it is to model with XLStat. Whenever you can focus on the model rather than on the programming, you gain time, knowledge, and expertise.

Here is the analysis:


Why do video game user numbers decline? The role of Critics/Reviewers in a tech-driven industry.

Before getting into the nitty-gritty of model fitting, let me provide a little context about the data I am about to analyze. The database (attached below) I assembled is an aggregation of 16,720 rows of video game publishing details, ranging from name, developer, and genre to sales and user ratings. I aggregated the events into a yearly frequency and therefore ended up with a time-series dataset. That aggregation produced the following graph, spanning 1982 to 2017. The first insight from visual inspection is that there is a break in the structure of the data. To confirm this, I ran Pettitt’s test of homogeneity, which suggests rejecting the null hypothesis “H0: Data are homogeneous” in favor of the alternative “Ha: There is a date at which there is a change in the data.” Therefore, I split the dataset into two: 1980-1995 and 1996-2017.

Here is the thing: data spanning 1996-2017 might look like an inverted parabola.



The first insight from the descriptive graphs is that the video games industry has seen a sharp decline in the number of users during the last decade, and the industry’s revenue has been substantially affected as a result. I argue here that critics’ harsh criticism of new video game releases largely Granger-causes the decline in the number of users, and thus the industry’s decay. I conclude that a one-unit decrease in the critics’ video game score may crowd out 100% of the yearly gain in the change of “Number of users,” plus an additional 50% of that same gain. Harsh criticism seems to discourage user growth despite positive contributions to growth evidenced in both the change in “Video game releases” and “User scores.” The latter two variables seem to contribute almost 9/10 and 3/4, respectively, of the growth in “Number of users.”

The first part of the post is this introduction. The second part outlines some stylized facts and assumptions. The third part describes the empirical data and evidence. The fourth part includes the specification of the most parsimonious model (OLS ARIMA (0,1,0)) and the methodology used for the econometric analysis. The fifth section presents the findings. The sixth section studies the disturbance term as evidence of the internal consistency and reliability of the methodology, showing no violations of the core OLS assumptions. The seventh section presents the conclusions, limitations, and recommendations for future research.

Stylized facts and assumptions:

  1. There exists a structural change in the data around 1996-1999. Pettitt’s test of homogeneity suggests rejecting the null hypothesis “H0: Data are homogeneous” in favor of the alternative “Ha: There is a date at which there is a change in the data.” Therefore, I split the dataset into two: 1980-1995 and 1996-2017.
  2. Data for the year 2017 are incomplete; a quick Google search shows the 2017 figures are not final. Therefore, data for 2017 are excluded.
  3. The regression-ready time-series database ranges from 1996-2016.
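For readers without XLStat, Pettitt’s statistic is simple enough to sketch by hand. The following is a minimal pure-Python implementation of the test used above (an illustrative sketch, not XLStat’s exact routine), with the usual large-sample approximation for the p-value:

```python
import math

def pettitt_test(series):
    """Pettitt's non-parametric change-point test.

    Returns (change_index, K, p_value), where K = max |U_t| and the
    p-value uses the approximation p ~ 2 * exp(-6 K^2 / (T^3 + T^2)).
    """
    T = len(series)
    U = []
    # U_t accumulates sign comparisons between the two sub-samples split at t
    for t in range(1, T):
        u = sum(
            (1 if series[j] > series[i] else -1 if series[j] < series[i] else 0)
            for i in range(t)
            for j in range(t, T)
        )
        U.append(u)
    K = max(abs(u) for u in U)
    t_hat = max(range(len(U)), key=lambda k: abs(U[k])) + 1
    p = min(1.0, 2.0 * math.exp(-6.0 * K * K / (T ** 3 + T ** 2)))
    return t_hat, K, p
```

Applied to the yearly aggregates, a small p-value supports splitting the sample, as done here.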

Database description / Empirical data:

  1. The regression-ready time-series database ranges from 1996-2016.
  2. The variable definitions are as follows:
    1. Y = I(1) variable Number of Users.
    2. X1 = I(1) variable Video Game Releases.
    3. X2 = I(1) variable User Score.
    4. X3 = I(1) variable Critics Score.
    5. X4 = I(1) variable Number of Critics.
  3. After transformation, the variables become stationary time series (KPSS tests fail to reject the null hypothesis “The series is stationary”):
    1. y = Relative Change in Number of Users.
    2. x1 = Relative Change in Video Game Releases.
    3. x2 = Relative Change in User Score Per Capita.
    4. x3 = Relative Change in Critics Score Per Capita.
    5. x4 = Relative Change in Number of Critics.
  4. There exists a unit root in the level data: the Augmented Dickey-Fuller (ADF) test fails to reject the null hypothesis “H0: There is a unit root for the series.” The variables are integrated of order 1, or I(1).
  5. There is an indication of at least one linear combination among the variables at the 5% significance level. The findings of the Johansen cointegration test support the Granger-causality statements.
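The transformation from the I(1) levels to the stationary series listed above is the first relative difference. A minimal sketch (the example figures are hypothetical, not the actual series):

```python
def relative_change(levels):
    """First relative difference: y_t = (Y_t - Y_{t-1}) / Y_{t-1}."""
    return [(curr - prev) / prev for prev, curr in zip(levels, levels[1:])]

# Hypothetical yearly "Number of Users" levels, for illustration only
users_levels = [100.0, 110.0, 99.0]
users_change = relative_change(users_levels)  # one observation shorter than the levels
```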


Model Specification: Ordinary Least Squares ARIMA (0,1,0).

The algebraic expression of the model is the following:
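Based on the variable definitions listed below, the level specification can presumably be written as:

```
Y_t = \beta_0 + \beta_1 X_{1,t} + \beta_2 X_{2,t} + \beta_3 X_{3,t} + \beta_4 X_{4,t} + \varepsilon_t
```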



  1. Y = I(1) variable Number of Users.
  2. X1 = I(1) variable Video Game Releases.
  3. X2 = I(1) variable User Score.
  4. X3 = I(1) variable Critics Score.
  5. X4 = I(1) variable Number of Critics.
  6. ε = White Noise.

Which in turn is the same as,
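In terms of the relative changes listed below, the differenced specification is presumably:

```
y_t = \beta_0 + \beta_1 x_{1,t} + \beta_2 x_{2,t} + \beta_3 x_{3,t} + \varepsilon_t
```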



  1. y = Relative Change in Number of Users.
  2. x1 = Relative Change in Video Game Releases.
  3. x2 = Relative Change in User Score Per Capita.
  4. x3 = Relative Change in Critics Score Per Capita.
  5. ε = White Noise.


Critics’ scores of new video game releases affect the dependent variable “Number of users” negatively. The estimated effect reduces the change in “Number of users” by 150%. In other words, a one-unit decrease in the critics’ video game score may wipe out the entire growth in “Number of users” plus half of that growth in a given year.

Harsh criticism seems to discourage user growth despite positive contributions evidenced in both New Video Game releases and User scores.

New video game releases seem to contribute almost 9/10 of the change in “Number of users.”

“User scores” seem to contribute roughly 3/4 of the growth in the change of “Number of users.”
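Since an ARIMA(0,1,0) with regressors is just OLS on the differenced series, the reported signs and magnitudes can be illustrated with a short simulation. The data below are synthetic, drawn to mimic the post’s coefficients of roughly +0.9, +0.75, and -1.5, and are not the actual video game series:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20  # roughly the number of annual observations after differencing

# Synthetic relative-change regressors (illustrative only)
x1 = rng.normal(0, 0.1, n)   # change in video game releases
x2 = rng.normal(0, 0.1, n)   # change in user score
x3 = rng.normal(0, 0.1, n)   # change in critics score
eps = rng.normal(0, 0.01, n)

# Simulate y using the post's reported effect sizes
y = 0.9 * x1 + 0.75 * x2 - 1.5 * x3 + eps

# OLS on the differenced data: ARIMA(0,1,0) with exogenous regressors
X = np.column_stack([np.ones(n), x1, x2, x3])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[1:] should recover approximately (0.9, 0.75, -1.5)
```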

Residuals: model reliability and consistency.

Assumption 4: Constant variance of the disturbance term. Test of heteroskedasticity.

The very first concern with this kind of data is a nonconstant variance of the error term. For the OLS ARIMA (0,1,0) model, I ran a test of heteroskedasticity of the residuals (White’s test), the results of which are presented below. The null hypothesis is “Residuals are homoskedastic,” while the alternative is “The residuals are heteroskedastic.” There is no evidence to reject the null.
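White’s test can be sketched without a package: regress the squared residuals on the regressors, their squares, and their cross products, and form LM = n·R², which is asymptotically chi-square under the null of homoskedasticity. A minimal sketch (it assumes X does not already contain an intercept column):

```python
import numpy as np

def white_test_lm(resid, X):
    """LM statistic for White's heteroskedasticity test.

    Auxiliary regression of squared residuals on the regressors, their
    squares, and cross products; returns (LM, df) where LM = n * R^2 is
    asymptotically chi-square with df degrees of freedom under the null.
    """
    n, k = X.shape
    cols = [np.ones(n)]                 # intercept for the auxiliary regression
    for i in range(k):
        cols.append(X[:, i])            # linear terms
    for i in range(k):
        for j in range(i, k):
            cols.append(X[:, i] * X[:, j])  # squares and cross products
    Z = np.column_stack(cols)
    e2 = resid ** 2
    coef, *_ = np.linalg.lstsq(Z, e2, rcond=None)
    fitted = Z @ coef
    ss_res = np.sum((e2 - fitted) ** 2)
    ss_tot = np.sum((e2 - e2.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return n * r2, Z.shape[1] - 1
```

The returned LM statistic would then be compared to the chi-square critical value with the returned degrees of freedom.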

Assumption 5: No autocorrelation between disturbances. Visual inspection of the partial autocorrelogram of the residuals.

The second concern with this kind of data is serial correlation. For the OLS ARIMA (0,1,0) model, I inspected the partial autocorrelogram of the residuals. There is no evidence of serial correlation, since none of the lags shows a significant effect, as the graphs below illustrate.


Assumption 2: X values are independent of the error term.

The third concern with these analyses usually stems from the independence of the error term. For the OLS ARIMA (0,1,0) model, I ran a KPSS test on the residuals (a test of stationarity/white noise), the results of which are shown in the table below. The null hypothesis is “The series is stationary,” while the alternative is “The series is not stationary.” There is no evidence to reject the null.

Test of stationarity of the residuals:

Assumption 3: Zero mean value of the disturbance term.

The residuals show no violation of the third core assumption, a zero mean of the disturbance term.


Conclusions, limitations, and recommendations for future research.

1. Critics’ harsh criticism of new video game releases largely Granger-causes the decline in the number of users in the sample.
2. A one-unit decrease in the critics’ video game score may crowd out the entire yearly gain plus half of that annual gain.
3. Harsh criticism seems to discourage user growth despite positive contributions to growth evidenced in both change in “Video Game releases” and “User scores.”
4. The change in “Video game releases” seems to contribute positively almost 9/10 of the growth in “Number of users.”
5. The change in “User scores” seems to contribute roughly 3/4 of the change in “Number of users.”
6. The main limitation of the analysis is that it excludes the online (streaming) segment of the industry.
7. In the era of Web 2.0, product reviews can drive the user pool of technological goods up or down. Analyzing reviews, customer-service conversation transcripts, and other sorts of unstructured data arises as a significant challenge for tech companies that seek to manage the user experience with greater efficacy.


Click the link below to access the database:

Video Games Sales.

Rent Prices Stickiness and the Latest CPI Data.

Fear of increasing inflation in the U.S. appears to be the trigger behind the market volatility of previous weeks. Recent gains in workers’ hourly compensation have had analysts measuring the effect of wages on inflation. In turn, analysts began pondering changes in the Fed’s monetary policy due to the apparent overheating path of the economy, which is believed to be led mostly by a low unemployment rate and tight labor markets. Thus, within the broad measure of inflation, the piece that will help complete the puzzle comes from housing market data. Although the “Shelter” item of the Consumer Price Index was among the biggest increases for January 2018, by technical definition its estimation weighs down the effect of housing prices on the CPI. Despite the strong argument for the BLS’s imputation of Owners’ Equivalent Rent, I consider it relevant to take a closer look at the Shelter component of the CPI from a different perspective. That is, despite the apparently far-fetched correlation between housing prices and market rents, it is worth visualizing how such a correlation might hypothetically work and affect inflation. The first step is identifying the likely magnitude of the effect of house prices on the estimation and calculation of rent prices.

Given what we know so far about rent price stickiness, shelter cost estimation, and interest rates, the challenge in completing the puzzle consists of understanding the link between housing prices (which are considered capital goods rather than consumables) and inflation. Such a link can be traced by looking at the relation between home prices and the price-to-rent ratio. In bridging the conceptual gap between capital goods (not measured in the CPI) and consumables (measured in the CPI), the Bureau of Labor Statistics forged a proxy for the amount a homeowner would pay if the house were rented instead: Owners’ Equivalent Rent. This proxy hides the market value of the house by simply equating it to nearby rent prices without controlling for house quality. Perhaps real estate professionals can shed light on this matter.

How Brokers Set Rent Prices.

It is often said that rental prices do not move in the same direction as housing prices. Indeed, in an interview, real estate professional Hamilton Rodrigues claimed that there is no such relationship. Nonetheless, when asked how he sets prices for newly rented properties, his answer hints at a link between housing prices and rent prices. Mr. Rodrigues’s estimates for rent prices equal either the average or the median of at least five “comparable” properties within a one-mile radius. The key word in Mr. Rodrigues’s statement is comparable. As a broker, he knows that rent prices go up if the value of the house goes up because of improvements and remodeling. Those home improvements represent a break from the observed stickiness of rent prices.

For the same reason, when a house gets an overhaul, one may expect a bump in the rent price. That bump should be reflected in the CPI and in inflation. I took Zillow’s data for December 2017 for the fifty U.S. states and ran a simple linear OLS model. By modeling the log of the price-to-rent ratio index as a dependent outcome of housing prices, I believe it is feasible to infer an evident spillover of increasing house prices onto current inflation expectations. The two independent variables are the log of the house price index for the bottom tier and the log of the house price index for the top tier. I assume here that when a house gets an overhaul, it switches from the bottom-tier dataset to the top-tier dataset.

Results and Conclusion.

The results table below shows that the beta coefficients are consistent with what one might expect: the top-tier index has a more substantial impact on the variation of the price-to-rent variable (estimated β₂ = .12 and standardized β = .24, versus β = .06 for the bottom tier). Hence, I would infer that overhauls might signal the link through which houses, as capital goods, could affect consumption indexes (CPI and CEI). Once one has figured out the effect of house prices on inflation, the picture of rising inflation will become clearer and more precise, and predictions about Fed tightening and accommodating policies will become clearer as well.
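Standardized betas, as reported above, rescale a raw coefficient by the ratio of standard deviations, which makes the two tiers comparable despite their different scales. A minimal sketch with made-up numbers (not Zillow’s data):

```python
import numpy as np

def standardized_beta(beta_raw, x, y):
    """Standardized coefficient: beta_raw * sd(x) / sd(y)."""
    return beta_raw * np.std(x, ddof=1) / np.std(y, ddof=1)

# Illustrative: a raw beta of 0.12 doubles to 0.24 when sd(x) is twice sd(y)
x = np.array([0.0, 2.0, 4.0])   # sample sd = 2
y = np.array([0.0, 1.0, 2.0])   # sample sd = 1
b_std = standardized_beta(0.12, x, y)
```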

Tentative evidence of a “Contagious Effect” after the Dow Sell-Off.

The stock market seems to be returning to the old normal of higher volatility. I suggested on Tuesday that former Fed Chairman Alan Greenspan’s comments could have brought volatility back by triggering the Dow sell-off on Monday, February 5th. As I wrote early in the week, I believe we should observe some panicked manifestation of economic anxiety because of Mr. Greenspan’s comments. In this blog post, I will show two Pearson correlation tests that may allow inferences as to how investors’ fear has grown since Mr. Greenspan stated that the American economy has both a stock market bubble and a bond market bubble. The Pearson correlation tests show that the correlation has strengthened during the last seven days (the first week of February 2018), suggesting there might be symptoms of a contagious effect.

I correlate two variables taken from two different time frames for the fifty states: the natural logs of the search term “Inflation” and the natural logs of the search term “VIX” (Volatility Index). I correlate the logs of the same searches for both the last seven days and the previous twelve months. By looking at the corresponding coefficients, one may infer that the correlation increased in strength after Mr. Greenspan’s statements, which is reflected in the last seven days of data. The primary goal of this analysis is to gather enough information for analysts to conclude whether there is a contagious effect that could make things worse. Understanding the dynamics of economic crises starts by identifying their triggers.

What is the Contagious Effect?

I should say that the best way to explain the Contagious Effect is by citing Paul Krugman’s quote of Robert Shiller (see also Narrative Economics), “when stocks crashed in 1987, the economist Robert Shiller carried out a real-time survey of investor motivations; it turned out that the crash was essentially a pure self-fulfilling panic. People weren’t selling because some news item caused them to revise their views about stock values; they sold because they saw that other people were selling”.

Thus, the correlation that would help infer a link between the two sets of expectations is that between inflation and the VIX, the index of investors’ fear. As I mentioned above, I took data from Google Trends showing interest in both terms and topics. I then took the logs of the data to normalize the metrics. The Pearson correlation tests show that the correlation has strengthened during the last seven days, suggesting there might be symptoms of a contagious effect. The over-the-year Pearson correlation coefficient is approximately .49, indicative of a medium positive correlation. The over-the-week Pearson correlation coefficient is stronger, at .74. Both p-values support rejecting the null hypothesis.
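The correlation of the logged series can be reproduced with NumPy. The state-level figures below are synthetic stand-ins for the Google Trends data (a shared component induces the positive correlation), not the actual search indexes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic state-level interest for the two search terms (50 states)
common = rng.uniform(10.0, 100.0, 50)           # shared underlying anxiety
inflation = common * rng.uniform(0.8, 1.2, 50)  # "Inflation" searches
vix = common * rng.uniform(0.8, 1.2, 50)        # "VIX" searches

# Correlate the natural logs, as in the post
r = np.corrcoef(np.log(inflation), np.log(vix))[0, 1]
```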

The following is the results table:

February 1st – February 8th correlation (50 U.S. States):

February 2017 – February 2018 (50 U.S. States):

It is worth noting the sequence of events that led to this series of blog posts. On January 31st, 2018, Alan Greenspan told Bloomberg News: “There are two bubbles: We have a stock market bubble, and we have a bond market bubble.” On February 5th, 2018, the Dow Jones index fell 1,175 points on Monday’s trading day. As of the afternoon of Friday the 9th, the Dow still struggled to recover and is considered to be in correction territory.

The Missing Part of the Dow Jones and Stock Market Sell-off Analysis.

The stock market keeps sending signals of correction as the Dow Jones struggles to rebound from the Monday, February 5th sell-off. Early in the week, economic analysts began to point to fears of high inflation due to an upward trend in workers’ compensation. News reports were based mostly on strong beliefs and arguments about the so-called Phillips curve. However, instead of focusing exclusively on the weak relationship between wages and inflation, I suggest a brief look at the textbook explanation of the link between the stock market and economic activity. In this blog post, I frame the current market correction under the arbitrage argument. Considering the arbitrage argument leads analysts to firm conclusions not only about monetary policy but also about fiscal policy. The obvious conclusion is that monetary policy (interest rates) will most likely aim at offsetting the effects of fiscal policy (tax cuts).

The Arbitrage Argument (simplified):

Market sell-offs unveil a very simple investment dilemma: bonds versus stocks. In theory, investors will opt for the choice that yields higher returns. First, investors look at the returns yielded by interest rates, a safer way to make money through financial institutions. Second, investors look at the returns yielded by companies, in other words, profits. If those profits yield higher returns than saving rates, investors will choose to invest in stocks. In both cases, agreements to repay the instrument will affect the contract and the financial gains, but that is the logic (things get messier once one includes the external sector).

The corresponding consequences are the market’s expectations about the economy. On the one hand, investors currently expect monetary policy to tighten. On top of jobs reports and previous announcements about rate increases, fears of inflation lead to the conclusion that the Federal Reserve will most likely accelerate the pace of interest rate hikes. Such a policy will decrease the amount of circulating money, thereby making it harder for businesses to get funds because, following the arbitrage framework, investors will prefer safer Treasury bonds. On the other hand, investors expect fiscal policy to have an impact on the economy as well. The recent corporate tax cut bolsters the expectation of higher profits in the stock market. Such a policy may lead investors to believe that financing companies through Wall Street will yield higher returns than the bond market. Thus, sell-offs unveil the hidden expectations of investors in America.

Expectations and the Economy:

Once expectations are formed and preferences are clear (meaning either a continuing correction path for other indexes, or a rebound), investors begin evaluating monetary policy adjustments. They all know the Federal Reserve’s dual mandate as well as the Taylor rule. The question is how the Federal Reserve will react to market preferences based on other leading economic indicators. Will the Fed accommodate, or will the Fed tighten? As of the first week of February, all events suggest that the Federal Reserve will most likely tighten to offset and counterbalance the recent tax cut incentives and their corresponding spillovers.

Recent Narratives of Stock and Bond Bubbles.

On February 5th, 2018, the Dow Jones index fell 1,175 points on the trading day. Four economic scenarios were being discussed in the news as of the first week of February 2018. First, there are indeed both stock market and bond market bubbles. Second, Monday’s Dow sell-off was just an anticipated correction on the investors’ side. Third, the stock market is returning to the old normal of higher volatility. Fourth, a Trump economic effect. I need not cover every concern about the US economy in this blog post. Hence, the analysis I think is needed now is ruling out a contagious effect from the narratives created around Monday’s Dow sell-off. Indeed, I believe that such a narrative, if any, can be traced back to former Chairman Alan Greenspan’s comments, when he stated on January 31st that America has both a bond market bubble and a stock market bubble. By discarding the contagious effect in current narratives, I side with analysts who have asserted that the Dow’s fall was just an anticipated market correction.

Can economists claim there is some association between Alan Greenspan’s comments and the Monday fall of the Dow Jones? I may not have an answer to that question yet, but we can look into the dynamics of the phenomenon to better understand how narratives could either deter or foster an economic crisis in early 2018. If there is room for arguing that Mr. Greenspan’s comments triggered the Dow sell-off on Monday, I believe we should be observing some sort of panic or manifestation of economic anxiety. Looking at data from Google Trends, I search for breakouts that may well be understood as “spreading” symptoms. In other words, if Mr. Greenspan’s comments had any effect on Monday’s sell-off, we should expect to see an increase in Google searches for two terms: first “Alan Greenspan,” and second “Stock Market Bubble.” The chart below shows Google Trends indexes for both terms. Little to nothing can be said about the graph after a visual inspection of the data. It is hard to believe that narratives of economic crisis are fast-spreading, or that Mr. Greenspan’s comments had any effect on the Dow sell-off.

How did things occur?

Economists are lagging in the study of narratives, hence the limited set of appropriate analytical tools. Robert Shiller wrote early in 2017 that “we cannot easily prove that any association between changing narratives and economic outcomes is not all reverse causality, from outcomes to the narratives,” which is certainly accurate once time has passed and empirical evidence becomes obscure. However, on February 1st, 2018, mainstream media extensively reported a couple of statements made by Alan Greenspan about bubbles. In the following days, several market indexes closed with relatively big losses. In detail, the events occurred as follows:

  1. On January 31st, 2018 Alan Greenspan told Bloomberg News: “There are two bubbles: We have a stock market bubble, and we have a bond market bubble.”
  2. On February 5th, 2018, the Dow Jones index fell 1,175 points on Monday’s trading day.

Whenever these events happen, we all rush to think of Robert Shiller. As Paul Krugman quoted Shiller today, February 6th, 2018: “when stocks crashed in 1987, the economist Robert Shiller carried out a real-time survey of investor motivations; it turned out that the crash was essentially a pure self-fulfilling panic. People weren’t selling because some news item caused them to revise their views about stock values; they sold because they saw that other people were selling.” In other words, Robert Shiller’s work on narrative economics is meant for these types of conjectures. Narratives of economic crisis play a critical role in dispersing fear whenever economic bubbles are about to burst. One way to gauge the extent of such a contagious effect is by looking at Google Trends search levels.



No signs of fast-spreading economic crisis narratives:

Despite the ample airtime coverage, there is little to no evidence of a market crash and economic crisis. Amid the wave of fast-paced breaking news announcing crises and linking them to political personalities, markets seem simply to be undergoing an expected correction after an extended period of gains. The best way to reach that conclusion is by looking at the firm numbers reported lately on job markets and by investigating the collective reaction to fear and expectations. Thus, four economic scenarios were being analyzed as of the first week of February. First, there are stock market and bond market bubbles. Second, Monday’s Dow sell-off was just an anticipated correction on the investors’ side. Third, the market is returning to the old normal of higher volatility. Fourth, a Trump effect. None of the other scenarios seems plausible to me. First, the sell-off appears not to have dug into investors’ and people’s minds, thereby avoiding the contagious effect. Second, despite the unreliability of winter economic statistics, the January 2018 jobs reports seem optimistic (I think those numbers will be revised downward). Third, claiming volatility is back in the stock market is like claiming Trump is back in controversy. Therefore, the only option left to explain Monday’s sell-off is the market correction argument.

The overuse of the word “Strong” in economic news.

The US economy added 228,000 new jobs in November 2017, and analysts rushed to assess the state of the economy as “STRONG.” Although job reports are indeed good indicators of the performance of the US economy, one should not take the job report, by and in itself, as the snapshot of the economy that allows for those “strong” conclusions. In this post, I show that despite the existence of a cointegrating vector between unemployment rate data and the count of the word “strong” in the Beige Book, journalists indeed overuse the word “strong” in headlines. Although interpreting cointegration as elasticity goes beyond the scope of this post, I think that by looking at the cointegrating relation it is safe to conclude that the current word count does not reflect the “strong” picture shown by the media, but rather more moderate economic conditions.

To start, let me go back to the first week of December 2017. Back then, news outlets ran headlines abusing the word “strong.” Some examples came from major news organizations in the US, such as The New York Times, Reuters, CNN, and The Washington Post. The following excerpts are just a sample of the narrative seen in those days:

“The American economy continues its strong performance” (CNN Money).

“The economy’s vital signs are stronger than they have been in years” (NY Times).

“Strong US job growth in November bolsters economy’s outlook” (Reuters).

“These are really strong numbers, which is pretty exciting…” (Washington Post).

Getting to know what is happening in the economy challenges economists’ wisdom. Researchers are constrained by the epistemological limits of data and reality, and so are journalists. To understand economic conditions, researchers use both quantitative and qualitative data, while journalists focus mostly on the qualitative. Regarding qualitative data, the Beige Book collects anecdotes and qualitative assessments from the twelve regional banks of the Federal Reserve System that may help news outlets gauge statements and headlines. The Fed surveys business leaders, bank employees, and other economic agents to gather information about current conditions in the US economy. As a researcher, I counted the number of times the word “strong” shows up in the Beige Book, starting back in 2006. The results are plotted as follows:

If I were going to identify a correlation between the word count of “strong” and the unemployment rate, it would be very hard to do so by plotting the two lines simultaneously. Most of the time, when simple correlations are plotted, the dots hardly show any relation between the two variables. In this case, however, cointegration goes a little deeper into the explanation. The graph below shows how the logs of both variables behave over time. Both decrease during the Great Recession and increase right after the crisis starts to end. More recently, however, the two variables began to diverge from each other, which makes the relation difficult to interpret, at least in the short run.

Qualitative data hold some clues in this case. Indeed, the plot shows a decreasing trend in the use of the word within the Beige Book. In other words, as journalists increase its use in headlines and news articles, economists at the Federal Reserve decrease their use of the word “strong.” If I were going to state causality from one variable to the other, I would first link the word “strong” to optimism about expected economic outcomes. One should then expect a decrease in the unemployment rate as the use of the word “strong” increases, a classic Keynesian perspective on the unemployment rate. Such a causal relation might constitute the cointegrating equation that the cointegration test identifies in the output tables below. In other words, the more you read “strong,” the more employers hire. By running a cointegration test, I can show that both variables are cointegrated over time; that is, there is a long-term relationship between them (both are I(1)). The test shows that there exists at least one stationary linear combination of the two variables over time.
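The cointegration logic can be sketched with the two-step Engle-Granger procedure (a different test from the Johansen test referenced earlier, but the intuition is the same): regress one series on the other in levels, then check that the residuals revert to their mean. The series below are simulated, not the Beige Book word counts:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 300

# Simulate two cointegrated I(1) series: x is a random walk and
# y shares x's stochastic trend plus stationary noise
x = np.cumsum(rng.normal(0.0, 1.0, T))
y = 2.0 * x + rng.normal(0.0, 1.0, T)

# Step 1: OLS of y on x in levels
X = np.column_stack([np.ones(T), x])
(b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - (b0 + b1 * x)

# Step 2: regress d(resid) on lagged resid; a clearly negative slope
# indicates mean reversion (a real test would compare the t-statistic
# against Engle-Granger critical values)
d_resid = np.diff(resid)
slope, *_ = np.linalg.lstsq(resid[:-1].reshape(-1, 1), d_resid, rcond=None)
```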

The difficulty with the current overuse of the word is that economists at the Federal Reserve are not using it at the same pace as journalists. In fact, the word count has dropped drastically over the last two years from its 2015 peak. Such a mismatch may create false expectations about economic growth, sales, and economic performance, and those false expectations may lead to economic crises.

Who should restaurateurs trust with a manager code or swipe ID card?

Who would restaurateurs trust with the manager code or swipe card when they are away?
Making such a decision seems natural to many businessmen and women. However, the restaurant industry has a singular feature that makes it very difficult: the sector shows the highest turnover rate in the nation, meaning people come and go twice as often as the national average across all industries. At this rate, everyone is a stranger all the time. So, if you only get to know people for a short while, what criterion would you use to decide who gets a POS swipe card? The answer is data.

My client in the NYC metro area faced that dilemma recently. Overwhelmed with purveyors, payroll, and bills, he needed to delegate some responsibilities to ease the burden of running his restaurant. When he went through his staff list, he realized all of them were nice, kind, and professional to some degree. It was hard for him to pinpoint the right person and be sure that he or she was the correct one. At the time, we had been helping him and the chef with menu development when the dilemma was brought to our attention. The owner had trusted other employees in the past based on mere intuition. We did not want to contest his beliefs; instead, we offered a different approach to decision making and asked him: why don’t you look at your POS data? When he said his decision came down to trust, we noted that trust builds upon performance and evidence.

Right after that conversation, we downloaded the servers’ transactions from the POS. We knew where we were heading, given that our Server Performance program helps clients identify precisely who performs and who does not. The owner wanted to give the swipe card to the most average performer. We looked at the Discount as Percentage of Sale metric and produced a graphic description for him to choose from. The first thing he noticed was that one of his previous cardholders, Benito, had a high record of discounting food. The owner stressed that Benito was a nice, generous, and hardworking guy. We did not disagree about Benito’s talents; however, we believed Benito should be generous with his own money, not the restaurant’s resources.
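For illustration, the Discount as Percentage of Sale metric is just each server's total discounts over total sales. A minimal sketch in Python, using made-up transaction rows (the field layout is an assumption, not our client's actual POS schema):

```python
from collections import defaultdict

# Hypothetical POS export: (server, sale_amount, discount_amount).
# The schema and the numbers are made up for illustration.
transactions = [
    ("Benito", 120.00, 18.00),
    ("Benito",  60.00, 12.00),
    ("Heath",   95.00,  4.00),
    ("Carlos",  80.00,  2.50),
]

sales = defaultdict(float)
discounts = defaultdict(float)
for server, sale, discount in transactions:
    sales[server] += sale
    discounts[server] += discount

# Discount as Percentage of Sale, per server, highest first.
pct = {s: 100.0 * discounts[s] / sales[s] for s in sales}
for server, share in sorted(pct.items(), key=lambda kv: -kv[1]):
    print(f"{server}: {share:.1f}%")
```

Ranking servers on a ratio rather than on raw discount dollars matters: a busy server can discount more in absolute terms while still being frugal relative to sales volume.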

Once he got over his disappointment with Benito’s performance, our restaurateur had two choices: either give the swipe card back to Benito and oversee him, or choose among the employees who were around the average. He agreed once more to make a data-driven and fair choice.

After graphing the data, the selection process narrowed the pool to four great servers. All of them looked very similar in both personality and job performance. The owner’s next suggestion was to flip a coin and see who wins. Instead, we proposed a more orthodox approach to decision making: the one-sample Student’s t-test.

We told the restaurateur that the criterion would be the statistical significance of each server’s discount record when compared with the staff’s arithmetic average: the scores closest to the mean would win the card. We shortlisted Heath, Borgan, Carlos, and Andres, as they stood out from the rest of the staff, who looked either too “generous” or too “frugal.” Among these four servers, whose discounts scored within the 7% range, we ran the t-test to see if there were any significant differences from the staff average. Heath’s score was not statistically different from the staff average, though only barely: her p-value (0.064) was just above the .05 threshold we set for our significance level. Borgan’s p-value was 0.910, comfortably within range, so he advanced to the next round. So did Carlos, with a p-value of 0.770. Finally, Andres got a p-value of 0.143.
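A one-sample t-test of this kind is easy to sketch from scratch. The Python below uses made-up discount figures, not our client's data; note also that the two-sided p-value here relies on a normal approximation, which is only rough for samples this small (XLStat, like scipy.stats.ttest_1samp, uses the exact t distribution with n − 1 degrees of freedom).

```python
import math
import statistics

def one_sample_t(sample, popmean):
    """t statistic and approximate p-value for H0: sample mean == popmean."""
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)   # stdev divides by n - 1
    t = (statistics.fmean(sample) - popmean) / se
    # Two-sided p-value via the normal approximation; this understates p
    # for small n, where the exact t distribution has fatter tails.
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))
    return t, p

# Made-up discount shares for one server versus a staff-wide mean of 5%.
t, p = one_sample_t([0.058, 0.061, 0.049, 0.066, 0.072, 0.055], 0.05)
print(f"t = {t:.2f}, p = {p:.3f}")
```

A low p-value rejects the null that the server sits at the staff average, which in this selection rule counts against the candidate, not in their favor.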

At the end of the day, there was no significant difference among the shortlisted candidates at that threshold. As a next step, we relaxed the threshold to a .07 significance level. Under this criterion, Heath’s p-value disqualified her, and we could cut the list down to three finalists. With three shortlisted candidates, the restaurant owner was able to make his first data-driven decision.

Raising economic expectations with the “after-tax” reckoning: President Trump’s corporate tax cut plan.

The series of documents published by the White House Council of Economic Advisers indicates that President Donald Trump’s tax reform will end up being his economic growth policy. The most persuasive pitch behind the corporate tax cut is that lowering taxes on corporations will foster investment and thereby economic growth. Further, the political rhetoric points to tax-cut-boosted GDP growth estimates of 3 to 5 percent in the long run. In support of the corporate tax cut, the White House Council of Economic Advisers presented both a theoretical framework and some empirical evidence on the effects of tax cuts on economic growth. Although the evidence presented by the CEA may look sound at first, any analyst reading the document would promptly notice that the story is incomplete and biased. In this blog post, I will briefly point to the incompleteness of the White House CEA’s justification for the tax cut policy. Then, I will show that the allegedly “substantial” empirical evidence meant to support the corporate tax-cut policy is insufficient as well as flawed. Third, I will make some remarks on the relevance of the tax cut as a fiscal policy tool in light of the current limitations of monetary policy. Finally, I conclude that despite the short-term benefits of the corporate tax cut, those benefits are temporary, since the new rate settles into a new normal, and, given that tax policy cannot be optimized, managing expectations from the administration is a waste of policy effort.

The very first policy claim the CEA stresses in its document is that a corporate tax cut does affect economic growth. Following the CEA’s reading of current economic conditions, the main obstacle to GDP growth rates above 2 percent is the low rate of private fixed investment. The CEA implicitly infers that the user cost of capital far exceeds profit rates; in other words, profit rates do not add up to enough to cover the depreciation and wear of capital investments. Thus, if private investment depends on expected profit as well as depreciation (simply put, I_t = I(π_t / (r_t + δ)), where the numerator is profit and the denominator is the user cost of capital, i.e., the real interest rate plus depreciation), the quickest strategy to alter the equation is to increase profit by lowering fixed costs such as taxes. The CEA’s rationale correctly assumes that no one can control the depreciation of capital goods, and wrongly assumes that no one (including the Federal Reserve, which currently faces serious limitations) can control the real interest rate.
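The arithmetic of the “after-tax” channel can be seen in a back-of-the-envelope sketch. Note that the tax rate does not appear in the equation above; I insert it into the numerator because firms invest out of after-tax profit, which is the channel the rhetoric implies, and every figure below (profit rate, tax rates, real rate, depreciation) is made up for illustration.

```python
# Illustrative arithmetic for the "after-tax" channel. The tax rate enters
# by shrinking the numerator of pi_t / (r_t + delta), since firms invest
# out of after-tax profit. All figures below are made up.

def profit_to_user_cost(pre_tax_profit, tax_rate, real_rate, depreciation):
    after_tax_profit = pre_tax_profit * (1 - tax_rate)
    user_cost = real_rate + depreciation          # r_t + delta
    return after_tax_profit / user_cost

before = profit_to_user_cost(0.10, 0.35, 0.02, 0.06)   # statutory 35% rate
after = profit_to_user_cost(0.10, 0.20, 0.02, 0.06)    # a proposed lower rate
print(f"ratio before: {before:.3f}, after: {after:.3f}")
```

The ratio rises mechanically with the cut, which is the whole pitch; the point of this post is that the boost fades once the lower rate becomes the baseline against which expectations are formed.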

The CEA fetched data from the Bureau of Economic Analysis to demonstrate that private investment is showing concerning signs of exhaustion. The Council sees a “substantial” weakness in equipment and structures investment. More precisely, the CEA remarks that both equipment and structures investment have declined since 2014; indeed, the two variables show declines of 2 and 4 percent, respectively. However, although the CEA considers these declines worrisome, they seem not extraordinary enough to raise genuine policy concerns. In fairness, those variables have shown sharper decreases in the past. The adjective “substantial,” which justifies the corporate tax cut proposal, is fundamentally flawed.

The problem with the proposal is that “substantial” does not imply “significant,” statistically speaking. In fact, when put in econometric perspective, one of the two declines does not appear statistically different from the mean; that decline, at least, looks like natural variation within the normal business cycle. A simple one-sample t-test shows how incorrect the “substantial” reading of the data is. A negative .023 change (p = .062) in private fixed investment in equipment (nonresidential) from 2015 to 2016 is just on the verge of normal business (M = .027, SD = .097) when the alpha level is set to .05. On the other hand, a negative .043 change (p = .013) in private fixed investment in nonresidential structures does stand out from the average change (M = .043, SD = .12), but it is still too early to claim a substantial deceleration of investment.

Thus, since the empirical data on investment do not support a change in tax policy, the CEA tries to maneuver growth through policy expectations. Its statements and publications unveil a desire to influence agents’ economic behavior by appealing to the “after-tax” component of expected profit calculations. Naturally, the economic benefits of corporate tax cuts will run only in the short term, as the new rate becomes the new normal. Therefore, nominally increasing profits will merely boost profit expectations in the short term while increasing the deficit in the long run. Ultimately, the problem with using tax reform as growth policy is that tax rates cannot be adjusted continuously for optimization. Unlike the interest rate, and for numerous reasons, tax policy is not a tool governments use to fine-tune markets or economic agents’ behavior.