Tentative evidence of a “Contagion Effect” after the Dow Sell-Off.

The stock market seems to be returning to the old normal of higher volatility. I suggested on Tuesday that former Fed Chairman Alan Greenspan’s comments could have brought volatility back by triggering the Dow sell-off on Monday, February 5th. As I wrote early in the week, I believed we should expect some panicked manifestation of economic anxiety in response to Mr. Greenspan’s comments. In this blog post, I will show two Pearson correlation tests that may allow inferences as to how investors’ fear has grown since Mr. Greenspan stated that the American economy has both a stock market bubble and a bond market bubble. The tests show that the correlation strengthened during the last seven days (the first week of February 2018), suggesting there might be symptoms of a contagion effect.

I correlate two variables across two different time frames for the fifty states: the natural logs of search interest in the term “Inflation” and the natural logs of search interest in the term “VIX” (Volatility Index). I run the correlation for both the last seven days and the previous twelve months. By comparing the corresponding coefficients, one may infer that the correlation strengthened after Mr. Greenspan’s statements, which the last seven days of data reflect. The primary goal of this analysis is to gather enough information for analysts to judge whether there is a contagion effect that could make things worse. Understanding the dynamics of economic crises starts with identifying their triggers.

What is the Contagion Effect?

I should say that the best way to explain the contagion effect is by citing Paul Krugman’s quotation of Robert Shiller (see also Narrative Economics): “when stocks crashed in 1987, the economist Robert Shiller carried out a real-time survey of investor motivations; it turned out that the crash was essentially a pure self-fulfilling panic. People weren’t selling because some news item caused them to revise their views about stock values; they sold because they saw that other people were selling”.

Thus, the correlation that would help infer a link between the two expectations is that between inflation and the VIX, the index of investors’ fear. As mentioned above, I took data from Google Trends showing interest in both terms, then took the logs of the data to normalize the metrics. The Pearson correlation tests show that the correlation strengthened during the last seven days, suggesting there might be symptoms of a contagion effect. The over-the-year Pearson correlation coefficient is approximately .49, indicative of a medium positive correlation. The over-the-week Pearson correlation coefficient is .74, indicative of a stronger correlation. Both p-values provide evidence to reject the null hypothesis of no correlation.
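For readers who want to reproduce this kind of test, here is a minimal Python sketch. It assumes two hypothetical CSV exports from Google Trends (inflation_by_state.csv and vix_by_state.csv, one row per state with an “interest” column); the file and column names are mine, not Google’s.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical Google Trends exports: one row per U.S. state,
# each with a search-interest score for the term.
inflation = pd.read_csv("inflation_by_state.csv")["interest"]
vix = pd.read_csv("vix_by_state.csv")["interest"]

# Take natural logs to normalize the interest scores (zeros dropped first).
log_inf = np.log(inflation.replace(0, np.nan).dropna())
log_vix = np.log(vix.replace(0, np.nan).dropna())
common = log_inf.index.intersection(log_vix.index)

# Pearson correlation and its p-value across the fifty states.
r, p_value = stats.pearsonr(log_inf[common], log_vix[common])
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
```

Running the same script on the seven-day and twelve-month exports yields the two coefficients compared below.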

The following are the results tables:

February 1st – February 8th correlation (50 U.S. States):

February 2017 – February 2018 correlation (50 U.S. States):

It is worth noting the sequence of events that led to this series of blog posts. On January 31st, 2018, Alan Greenspan told Bloomberg News: “There are two bubbles: We have a stock market bubble, and we have a bond market bubble.” Then, on February 5th, 2018, the Dow Jones index fell 1,175 points on Monday’s trading day. As of the afternoon of Friday the 9th, the Dow still struggled to recover and is considered to be in correction territory.

The Missing Part of the Dow Jones and Stock Market Sell-off Analysis.

The stock market keeps sending signals of a correction as the Dow Jones struggles to rebound from the Monday, February 5th sell-off. Early in the week, economic analysts began pointing to fears of high inflation driven by an upward trend in worker compensation. News reports rested mostly on strong beliefs and arguments over the so-called Phillips Curve. However, instead of focusing exclusively on the weak relationship between wages and inflation, I suggest a brief look at the textbook explanation of the link between the stock market and economic activity. In this blog post, I frame the current market correction under the arbitrage argument. Considering the arbitrage argument as the explanation of the correction leads analysts to firm conclusions not only about monetary policy but also about fiscal policy. The obvious conclusion is that monetary policy (interest rates) will most likely aim at offsetting the effects of fiscal policy (tax cuts).

The Arbitrage Argument (simplified):

Market sell-offs unveil a very simple investment dilemma: bonds versus stocks. In theory, investors opt for the choice that yields higher returns. First, investors look at the returns yielded by interest rates, the safer way to make money through financial institutions. Second, investors look at the returns yielded by companies; in other words, profits. If profits yield higher returns than saving rates, investors will choose stocks. In both cases, the terms of repayment affect the contract and the financial gains, but that is the logic (things get messier once one includes the external sector).
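As a toy illustration of that logic, with invented numbers rather than market data, the dilemma reduces to picking the larger of two yields:

```python
# Invented figures: a Treasury yield versus an equity earnings yield
# (earnings divided by price). Pick whichever pays more.
bond_yield = 0.028      # hypothetical 10-year Treasury yield
earnings_yield = 0.045  # hypothetical stock market earnings yield

choice = "stocks" if earnings_yield > bond_yield else "bonds"
print(f"Arbitrage logic favors {choice}: "
      f"{max(earnings_yield, bond_yield):.1%} vs {min(earnings_yield, bond_yield):.1%}")
```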

The corresponding consequences are the market’s expectations about the economy. On the one hand, investors currently expect monetary policy to tighten. On top of jobs reports and previous announcements about rate increases, fears of inflation lead to the conclusion that the Federal Reserve will most likely accelerate the pace of interest rate hikes, pushing up yields on instruments such as the ten-year Treasury bond. Such policy will decrease the amount of circulating money, making it harder for businesses to raise funds because, following the arbitrage framework, investors will prefer the safer Treasury bonds. On the other hand, investors expect fiscal policy to have an impact on the economy as well. The recent corporate tax cut bolsters the expectation of higher profits from the stock market. Such policy may lead investors to believe that financing companies through Wall Street will yield higher returns than the bond market. Thus, sell-offs unveil the hidden expectations of investors in America.

Expectations and the Economy:

Once expectations seem formed and clear concerning declared preferences (meaning either a continued correction path for other indexes, or a rebound), investors begin evaluating monetary policy adjustments. They all know the Federal Reserve’s dual mandate as well as the Taylor Rule. The question is how the Federal Reserve will react to market preferences in light of other leading economic indicators. Will the Fed accommodate? Or will the Fed tighten? As of the first week of February, all events suggest that the Federal Reserve will most likely tighten to offset and counterbalance the recent tax cut incentives and their corresponding spillovers.
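For reference, the standard Taylor (1993) rule they would have in mind sets the nominal policy rate i_t from the equilibrium real rate r*, inflation π_t relative to its target π*, and the output gap:

```latex
i_t = r^{*} + \pi_t + 0.5\,(\pi_t - \pi^{*}) + 0.5\,(y_t - \bar{y}_t)
```

Above-target inflation or a positive output gap pushes the prescribed rate up, which is precisely the tightening scenario the market is weighing.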

Raising economic expectations with the “after-tax” reckoning: President Trump’s corporate tax cut plan.

The series of documents published by the White House Council of Economic Advisers indicates that President Donald Trump’s tax reform will end up being his economic growth policy. The most persuasive pitch behind the corporate tax cut is that lowering taxes on corporations will foster investment and thereby economic growth. Further, the political rhetoric refers to GDP growth estimates of a tax-cut-boosted 3 to 5 percent in the long run. In support of the corporate tax cut, the White House Council of Economic Advisers presented both a theoretical framework and some empirical evidence of the effects of tax cuts on economic growth. Even if the evidence the CEA presents is accurate, any analyst reading the document would promptly notice that the story is incomplete and biased. In this blog post, I will briefly point to the incompleteness of the White House CEA’s justification for its tax cut policy. Then, I will show that the alleged “substantial” empirical evidence meant to support the corporate tax-cut policy is insufficient as well as flawed. Third, I will make some remarks on the relevance of the tax cut as a fiscal policy tool given the current limitations of monetary policy. Finally, I conclude that the short-term benefits of the corporate tax cut are temporary, fading as the new rate settles into the new normal; and, given that tax policy cannot be optimized, setting expectations from the administration is a policy waste of time.

The very first policy instance the CEA stresses in its document is that a corporate tax cut does affect economic growth. Following the CEA’s reading of current economic conditions, the main obstacle to GDP growth rates above 2 percent is the low rate of private fixed investment. The CEA implicitly infers that the user cost of capital far exceeds profit rates; in other words, profit rates do not add up to enough to cover depreciation and the wear of capital investments. Thus, if private investment depends on expected profit as well as depreciation (simply put, I_t = I(π_t/(r_t + δ)), where the numerator is profit and the denominator is the user cost of capital, that is, the real interest rate plus depreciation), the quickest strategy to alter the equation is to increase profit by lowering a fixed cost such as taxes. The CEA’s rationale assumes, correctly, that no one can control the depreciation of capital goods, and assumes, wrongly, that no one (including the Federal Reserve, which currently faces serious limitations) can control the real interest rate.
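To make the tax channel explicit, one can write the corporate tax rate τ into the numerator (my notation, an illustrative extension of the formula above rather than the CEA’s own):

```latex
I_t = I\!\left(\frac{(1-\tau)\,\pi_t}{r_t + \delta}\right)
```

Lowering τ raises after-tax profit and hence investment, while r_t and δ remain outside the policymaker’s reach; that is exactly the lever the CEA’s argument pulls.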

The CEA fetched data from the Bureau of Economic Analysis to demonstrate that private investment is showing concerning signals of exhaustion. The Council sees a “substantial” weakness in equipment and structures investment. More precisely, the CEA remarks that both equipment and structures investment have declined since 2014. Indeed, the two variables show declines of 2 and 4 percent, respectively. However, although the CEA considers the declines worrisome, the decreases are not extraordinary enough to raise genuine policy concerns; in fairness, these variables have shown sharper decreases in the past. The adjective “substantial,” on which the corporate tax cut proposal rests, is fundamentally flawed.

The problem with the proposal is that “substantial” does not imply “significant,” statistically speaking. In fact, put in econometric perspective, one of the two declines does not appear statistically different from the mean; in other words, it looks like natural variation within the normal business cycle. A simple one-sample t-test shows the weakness of the “substantial” reading of the data. A negative .023 change (p = .062) in private fixed investment in equipment (nonresidential) from 2015 to 2016 is just on the verge of normal business (M = .027, SD = .097) at an alpha level of .05. On the other hand, a negative .043 change (p = .013) in private fixed investment in nonresidential structures stands out from the average change (M = .043, SD = .12), but it is still too early to claim a substantial deceleration of investment.
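A sketch of that kind of check in Python follows. The series of annual changes is invented for illustration (the post reports M = .027, SD = .097 for the actual data), so only the mechanics carry over:

```python
import numpy as np
from scipy import stats

# Hypothetical annual percent changes in private fixed investment.
changes = np.array([0.12, -0.08, 0.05, 0.15, -0.11, 0.09, 0.03,
                    0.10, -0.02, 0.07, 0.04, -0.05, 0.08, 0.01])

# One-sample t-test: is the historical mean of the series statistically
# different from the latest observed change of -0.023?
t_stat, p_value = stats.ttest_1samp(changes, popmean=-0.023)
print(f"mean = {changes.mean():.3f}, sd = {changes.std(ddof=1):.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f} (alpha = .05)")
```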

Thus, if the empirical data on investment do not support a change in tax policy, the CEA is left trying to engineer growth through policy expectations. Its statements and publications unveil a desire to influence agents’ economic behavior by injecting the “after-tax” condition into expected profit calculations. Naturally, the economic benefits of a corporate tax cut run only in the short term, as the new rate becomes the new normal. Therefore, nominally increasing profits will boost profit expectations in the short term while increasing the deficit in the long run. Ultimately, the problem with using tax reform as growth policy is that tax rates cannot be adjusted continuously for optimization. Unlike the interest rate, and for numerous reasons, tax policy is not a tool governments use to fine-tune markets or economic agents’ behavior.


Implications of “Regression Fishing” over Cogent Modeling.

One of the most recurrent questions in quantitative research is how to assess and rank the relevance of the variables included in a multiple regression model. This type of uncertainty arises most often when researchers prioritize data mining over well-thought-out theory. Recently, a contact of mine on social media posed the following question: “Does anyone know of a way of demonstrating the importance of a variable in a regression model, apart from using the standard regression coefficient? Has anyone had any experience in using Johnson’s epsilon or alternatives to solve this issue? Any help would be greatly appreciated, thank you in advance for your help”. In this post, I share the answer I offered him, stressing that what he wanted was to justify the inclusion of a given variable beyond its weighted effect on the dependent variable. In the context of science and research, I pointed to the need for sound modeling over the questionable practices of “p-hacking” and “regression fishing”.


What I think his concern pertained to is modeling in general. If I am right, then the way researchers should tackle such a challenge is by establishing the relative relevance of a regressor beyond the coefficients’ absolute values, which requires a combination of intuition and just a bit of data mining. Thus, I advised my LinkedIn contact to gauge the appropriateness of the variables by comparing them against each other and analyzing them on their own. The easiest way to proceed is to scrutinize the variables independently and then jointly. Assessing the soundness of each variable is therefore the first procedure I suggested he go through.

In other words, for each of the variables I recommended checking the following:

First, data availability and the degree of measurement error;

Second, make sure every variable is consistent with your thinking, that is, your theory;

Third, check the core assumptions;

Fourth, try to include rival models that explain the same thing your model explains.

Now, for whatever reason, all variables seemed appropriate to the researcher. He had checked the standards for including variables, and everything looked good. In addition, he believed his model was sound and cogent with respect to the theory he had surveyed. So, I suggested raising the bar by analyzing the variables in the context of the model. Here is where the second step, a so-called post-mortem analysis, begins.

Post-mortem analysis meant that, after running as many regressions as he could, we would scrutinize the variables for specification errors, measurement errors, or both. Given that specification errors were present in the model, I suggested a test of nested hypotheses, which asks whether the model omitted a relevant variable (misspecification error) or added an irrelevant one (overfitting error). In this case, the modeling error was the latter.
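As a sketch of such a nested-hypothesis test, here is one way to run it in Python with statsmodels, using hypothetical variables in which x3 plays the suspect regressor:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical dataset: y is driven by x1 and x2; x3 is the suspect variable.
rng = np.random.default_rng(42)
df = pd.DataFrame({"x1": rng.normal(size=200),
                   "x2": rng.normal(size=200),
                   "x3": rng.normal(size=200)})
df["y"] = 1.0 + 2.0 * df["x1"] - 0.5 * df["x2"] + rng.normal(size=200)

restricted = ols("y ~ x1 + x2", data=df).fit()
full = ols("y ~ x1 + x2 + x3", data=df).fit()

# F-test on the nested models: a high p-value suggests x3 is irrelevant
# (overfitting); a low one suggests dropping it would misspecify the model.
print(sm.stats.anova_lm(restricted, full))
```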

The bottom line, in this case, was that regardless of the test my contact decided to run, the critical issue would always be to track and analyze the nuances in the error terms of the competing models.

Unemployment √. Inflation √. So… what is the Fed worrying about?

Although the Federal Open Market Committee’s (hereafter FOMC) March meeting on monetary policy focused on what was apparently a disagreement over the timing for modifying the federal funds rate, the minutes indicate that the disagreement is not only about timing but also about exchange rate challenges. Not only does the Fed struggle with when the best moment to raise the rate is, but it also grapples with the extent of its policy decisions’ reach. The FOMC’s current economic outlook and its consensus on the state of the U.S. economy leave no room for doubt on domestic issues, though plenty for uncertainties about foreign markets. Thus, the minutes of the meeting held in Washington on March 15th-16th, 2016 unveil an understated intent to influence global markets by stabilizing the U.S. currency. On the one hand, both objectives of monetary policy seem accomplished as regards labor markets and inflation. On the other, global deceleration is the one factor that concerns the Fed, since it could have adverse spillovers on America. The most recent monetary policy meeting reveals a subtle attempt to stabilize the U.S. dollar exchange rate at some level, thereby favoring American exports.

Unemployment rate √. Inflation rate √.

The institutional objective of the Federal Reserve seems uncompromised these days. Economic activity is picking up overall, the labor market is at desired levels, and inflation seems somewhat under control. Economists’ confidence right now starts with the U.S. household sector. Household spending looks healthy, and officials at the Bank are confident such spending will keep buoying labor markets. As stated in the minutes, “strong expansion of household demand could result in rapid employment growth and overly tight resource utilization, particularly if productivity gains remained sluggish” (Page 6). Indeed, the labor market is showing strong gains in employment, which pushed the unemployment rate down to 5.0 percent by the end of the first quarter of 2016.

Furthermore, the FOMC reads the high level of consumer confidence as a guarantee of a sustained growth path. The Committee also pointed out that low gasoline prices are stimulating not only higher consumption but also motor vehicle sales. Members know of the favorable, relatively high household wealth-to-income ratio. However, they also recognize that regions affected by oil prices are starting to struggle while business fixed investment shows signs of weakening. Nevertheless, the consensus among members of the Committee reflects overall optimism about the resilience of the economy rather than worry about the outlook.

By Catherine De Las Salas

The fear comes from overseas.

The transcripts, which were released on April 6th, 2016, show that Fed officials’ concerns stem from global economic and financial developments. The FOMC “saw foreign economic growth as likely to run at a somewhat slower pace than previously expected, a development that probably would further restrain growth in U.S. exports and tend to damp overall aggregate demand” (Page 8). Officials also flagged warnings about wider credit spreads on riskier corporate bonds. In sum, policymakers at the FOMC interpret the current lackluster global situation as a threat to the economic growth of the United States.

Ruling out choices.

Therefore, the overlap of those two conditions has made the Committee anxious about intervening in an arena that could be out of its reach. By leaving the federal funds rate unmoved in March, and perhaps until June, the FOMC is not aiming to stimulate investment domestically, nor to control inflation. Rather, the policy choice reveals a subtle attempt to keep the U.S. dollar exchange rate stable overseas, thereby favoring American exports. This can be inferred from the minutes based on the Committee’s consensus on the state of the economy. First, U.S. labor markets are strong, and the Fed considers the actual unemployment rate to correspond to its longer-run estimate. Second, inflation, whether headline or core, is projected and expected to be on target. Third, domestic conditions are in general satisfactory. The only risky factor that remains is the rest of the world. Therefore, whatever action the Committee took at the March meeting should be interpreted as intended to influence global markets.


How and when to make “policy recommendations”.

The ultimate goal of a policy analyst is to make “policy recommendations”. But when is it appropriate to make such bold statements without looking inexperienced? The following article looks at the requirements for reliable and sound policy recommendations. The focus here is not on how to write them, but on how to support them technically. The best policy recommendations derive from identifying a research problem, pinpointing proxies, quantifying magnitudes, and optimizing responses under a set of constraints. In other words, good policy recommendations translate into proper measurement, research, and interpretation.

To identify and describe the policy issue:

Identifying and describing a policy issue is something most analysts do well. Examining a hypothesis through several qualitative methods is what most of us do. So, let us say our policy issue concerns exports of American manufactured products within a geographical scope and a targeted timeframe. In formulating the research, the analyst would identify a set of factors that directly or indirectly affected manufacturing production in the U.S. from 2009 to 2015. Let us also say that, after a rigorous literature review as well as three rounds of focus groups with stakeholders, our analyst concluded that the primary factor negatively affecting U.S. manufacturing exports is China’s currency. Although everyone knows the troubles of U.S. manufacturing run far deeper than variation in China’s currency, let us stop there for the sake of discussion. Thus far, no policy recommendations can be made, unless our analyst wants to embarrass his work team.

Proceed with the identification of metrics:

Once our analyst has found and defined the research problem pertaining to the policy issue, he might wish to proceed with the identification of the metrics that best represent the problem: that is, a measure of variation in manufacturing production and a measure of variation in China’s currency. Manufacturing can be captured by the “Total Value of Shipments” statistics from the Census Bureau. China’s currency can be taken from the Federal Reserve in the form of the U.S. dollar-Renminbi exchange rate. Again, our analyst may come up with many metrics for capturing what he thinks best represents the problem’s variables, but for now, these two are the best available representation. Thus far, no policy recommendations can be made, unless our analyst wants to ridicule himself.

Advance with the configuration of the database:

After exhausting all possible proxies, our analyst may advance to the configuration of the dataset. Whether the dataset is a time series or cross-sectional, its configuration must not violate the assumptions of regression analysis. At this stage of the research, our analyst can specify a simple statistical or econometric model; in our example, a simple linear regression of the form Y = β1 − β2X + ε. Thus far, no policy recommendations can be made, unless our analyst wants to lose his job.

Run regressions:

With the data ready to go, our policy expert may start by running data summaries and main statistics. Then he would run regressions in whatever statistical package he prefers. In our example, he would regress the U.S. value of shipments of non-defense capital goods (dependent variable) on the U.S. dollar-Renminbi exchange rate (independent variable). Given that our case happens to be a time series, the series must be stationary; hence, the measures had to be transformed into stationary metrics by taking the first difference and then the percentage change over the months. In other words, the model is a random walk with a drift. In case the reader wants to check, below are the graphs and brief summary statistics. Thus far, no policy recommendations can be made, but our analyst is getting closer and closer.
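A minimal sketch of those transformations and the regression, assuming a hypothetical monthly file shipments_fx.csv with shipments and usd_cny columns (the names are mine):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly series: value of shipments (Census Bureau) and the
# U.S. dollar-Renminbi exchange rate (Federal Reserve), indexed by month.
df = pd.read_csv("shipments_fx.csv", parse_dates=["month"], index_col="month")

# Percentage change over the months renders both series stationary
# (the first difference of the logs would be a close alternative).
y = df["shipments"].pct_change().dropna()
x = df["usd_cny"].pct_change().dropna()
x, y = x.align(y, join="inner")

# Regress shipment changes on exchange-rate changes, with a constant (drift).
model = sm.OLS(y, sm.add_constant(x)).fit()
print(model.summary())
```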

Summary ARIMA Model.

Compare results with other researchers’ estimates:

Now that our analyst has estimated all the parameters, he should rush back to the literature review and compare his results with other researchers’ estimates. If everything makes sense within the bounds set by both the literature and his regressions, our policy professional may start using the model for control and policy purposes. At this point in the research, the analyst is equipped with enough knowledge about the phenomenon, and therefore he can start making policy recommendations.

Finally, to complete our policy expert’s story, let us assume that China’s government is interested in growing its industries in the sector over a horizon of two years. What, then, could our analyst’s policy recommendation be? Given that our analyst knows the variables and parameters of the policy issue, he can now draft a strategy aimed at altering those parameters. In my example, I know that a unit increase in the U.S. dollar-Renminbi exchange rate could generate a 75% decrease in the monthly change in the value of shipments of non-defense capital goods. Despite the low level of statistical significance, let us assume the model works just fine for policy purposes.

Correlograms and Summary Statistics.

31 Data Sources, Surveys and Metrics for Doing Research on U.S. Labor Market.

If your research project encompasses facts about the U.S. labor market, here are some useful data sources and metrics that might illuminate your research. Although there might be discrepancies between your narrowed research question and the data sources shown below, chances are you will find a set of metrics that capture a good proxy for your research topic.

Look through the list and then identify a possible match between your research question and the data source:

1. Employment and Unemployment (Regional, County, National and Metropolitan Area). Data source: U.S. Bureau of Labor Statistics.
2. Unemployment Insurance Claimants. Data source: U.S. Department of Labor.
3. Real Earnings. Data source: U.S. Bureau of Labor Statistics.
4. Labor Force Characteristics of Foreign Born Workers. Data source: U.S. Bureau of Labor Statistics.
5. Job Openings and Labor Turnover Survey (JOLTS). Data source: U.S. Bureau of Labor Statistics.
6. Employment Situation. Data source: U.S. Bureau of Labor Statistics.
7. ADP Employment. Data source: ADP.
8. Productivity and Cost. Data source: U.S. Bureau of Labor Statistics.
9. Employment Cost. Data source: U.S. Bureau of Labor Statistics.
10. Personal Income and Outlays. Data source: U.S. Bureau of Economic Analysis.
11. Business Employment Dynamics. Data source: U.S. Bureau of Labor Statistics.
12. Employment Characteristics of Families. Data source: U.S. Bureau of Labor Statistics.
13. Usual Weekly Earnings of Wage and Salary Workers. Data source: U.S. Bureau of Labor Statistics.
14. College Enrollment and Work Activity of High School Graduates. Data source: U.S. Bureau of Labor Statistics.
15. Number of Jobs, labor market experience (Longitudinal Survey). Data source: Bureau of Labor Statistics.
16. Occupational Employment and Wages. Data source: U.S. Bureau of Labor Statistics.
17. State and Local Personal Income and Real Personal Income. Data source: U.S. Bureau of Economic Analysis.
18. Employment Situation of the Veterans. Data source: U.S. Bureau of Labor Statistics.
19. Employer Cost for Employee Compensation. Data source: U.S. Bureau of Labor Statistics.
20. Volunteering in the U.S. Data source: U.S. Bureau of Labor Statistics.
21. Major Work Stoppages. Data source: U.S. Bureau of Labor Statistics.
22. Mass Layoffs. Data source: U.S. Bureau of Labor Statistics.
23. Union Members. Data source: U.S. Bureau of Labor Statistics.
24. Employee tenure (2014). Data source: Bureau of Labor Statistics.
25. Consumer Expenditure (2013). Data source: U.S. Bureau of Labor Statistics.
26. Summer Youth Labor Force. Data source: U.S. Bureau of Labor Statistics.
27. Employee Benefits (Private sector). Data source: U.S. Bureau of Labor Statistics.
28. Persons with Disabilities Characteristics. Data source: U.S. Bureau of Labor Statistics.
29. Employment Projections 2012-2022. Data source: U.S. Bureau of Labor Statistics.
30. Income of the 55 and older. Data source: U.S. Social Security Administration.
31. Women in Labor Force (2012). Data source: U.S. Bureau of Labor Statistics.

We can support your research:

Econometricus.com helps researchers understand the economic situation of a specific industry, sector, or policy by looking at the United States’ labor market environment. “U.S. Labor Market Analysis” starts by summarizing statistics on income, labor productivity, and general labor market conditions. Applied analysis can be either “Snapshots of the U.S. Economy” or historical trends (time-series analysis). Our clients can rely on a thorough and exhaustive data-driven analysis that illuminates forecasting and economic decision-making. Clients may downsize or augment the scope of the research to tailor it to their needs.

Get a quote from econometricus.com.
Email: giancarlo[at]econometricus[dot]com
Call: 1-917-825-5737

20 Data Sources and Surveys for Research on U.S. Economic Prospects.

If your research project involves aspects of the general prospects of the U.S. economy, here are some useful data sources, surveys, and metrics that might illuminate your research. Although there might be discrepancies between your narrowed research question and the data sources shown below, chances are you will find a set of metrics that capture a good proxy for your research topic.

Look through the list and then identify a possible match between your research question and the data source:

1. Texas Manufacturing Outlook Survey. Data source: Federal Reserve Bank of Dallas.
2. Fifth District Survey of Manufacturing Activity. Data source: Federal Reserve Bank of Richmond.
3. Kansas City Manufacturing Survey. Data source: Federal Reserve Bank of Kansas City.
4. Manufacturing Business Outlook. Data source: Federal Reserve Bank of Philadelphia.
5. Empire State Manufacturing Survey. Data source: Federal Reserve Bank of New York.
6. Durable Goods Manufacturers. Data source: U.S. Census Bureau.
7. Retail Trade, Monthly and Annual. Data source: U.S. Census Bureau.
8. Manufacturing Trade Inventories. Data source: U.S. Census Bureau.
9. Monthly Wholesale Trade. Data source: U.S. Census Bureau.
10. Manufacturers’ shipments, Inventories and Orders. Data source: U.S. Census Bureau.
11. Corporate Profits. Data source: U.S. Bureau of Economic Analysis.
12. Tourism Sales. Data source: U.S. Bureau of Economic Analysis.
13. Services, annual and quarterly. Data source: U.S. Census Bureau.
14. Fifth District Survey of Services. Data source: Federal Reserve Bank of Richmond.
15. Chicago Fed National Activity Index. Data source: Federal Reserve Bank of Chicago.
16. Livingston Survey of Economists. Data source: Federal Reserve Bank of Philadelphia.
17. E-Commerce. Data source: U.S. Census Bureau.
18. Survey of Professional Forecasters. Data source: Federal Reserve Bank of Philadelphia.
19. National Economic Trends. Data source: Federal Reserve Bank of St. Louis.
20. National Transportation Statistics. Data source: U.S. Bureau of Transportation Statistics.

At Econometricus.com we help social science researchers understand the economic situation of a specific industry, sector, or policy by looking at the United States’ general economic environment. We support researchers by summarizing leading economists’ consensus about the current conditions of the U.S. economy. We also analyze empirical data on manufacturing activity, trade and commerce, and the service sector. Applied analysis can be either “Snapshots of the U.S. Economy” or historical trends (time-series analysis). Our clients can rely on a thorough and exhaustive data-driven analysis that illuminates forecasting and economic decision-making. Clients may downsize or augment the scope of the research to tailor it to their needs.

Get a quote from econometricus.com.
Email: giancarlo[at]econometricus[dot]com
Call: 1-917-825-5737