Eric has been working to build, distribute, and strengthen the GAUSS universe since 2012. He is an economist skilled in data analysis and software development. He has earned a B.A. and MSc in economics and engineering and has over 18 years of combined industry and academic experience in data analysis and research.
The GAUSS dataframe, introduced in GAUSS 21, is a powerful tool for storing data. In today’s blog, we explain what a GAUSS dataframe is and discuss the advantages of making it a part of your everyday GAUSS use.
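As a quick taste of the workflow, the short sketch below loads a CSV file into a dataframe and pulls a few rows. The file name housing.csv and the column name price are hypothetical placeholders used only for illustration.

// Load a CSV file directly into a GAUSS dataframe;
// loadd detects column types (numeric, string, date) for us.
housing = loadd("housing.csv");

// Dataframes keep their variable names, so columns can be
// referenced by name as well as by position.
print housing[1:5, .];
print housing[1:5, "price"];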
This blog provides a non-technical look at impulse response functions and forecast error variance decomposition, two tools that are integral to structural analysis with vector autoregressive models.
If you’re looking to gain a better understanding of these important multivariate time series techniques, you’re in the right place.
We cover the basics, including:
What is structural analysis?
What are impulse response functions?
How do we interpret impulse response functions?
What is forecast error variance decomposition?
How do we interpret forecast error variance decomposition?
In today’s blog, you’ll learn the basics of the vector autoregressive model. We lay the foundation for getting started with this crucial multivariate time series model and cover the important details including:
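For reference, the reduced-form VAR(p) model for a $K$-dimensional vector $y_t$ can be written as

$$ y_t = c + A_1 y_{t-1} + A_2 y_{t-2} + \dots + A_p y_{t-p} + \varepsilon_t, $$

where $c$ is a $K \times 1$ vector of intercepts, each $A_i$ is a $K \times K$ coefficient matrix, and $\varepsilon_t$ is a $K \times 1$ white-noise error term with covariance matrix $\Sigma$.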
Categorical variables offer an important opportunity to capture qualitative effects in statistical modeling. Unfortunately, it can be tedious and cumbersome to manage categorical variables in statistical software.
The new GAUSS category type, introduced in GAUSS 21, makes it easy and intuitive to work with categorical data.
In today’s blog we use real-life housing data to explore the numerous advantages of the GAUSS category type including:
Easy setup and viewing of categorical data.
Simple renaming of category labels.
Easy changing of the reference base case and reordering of categories.
Single-line frequency plots and tables.
Internal creation of dummy variables for regressions.
Proper labeling of categories in regression output.
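As a preview of how little code this takes, the sketch below loads a hypothetical housing dataset, marks one column as categorical, and uses it directly in a regression. The file name and variable names are placeholders, and the cat() keyword, frequency() call, and formula-string ols call reflect our reading of the GAUSS 21+ interface; check the current documentation for exact signatures.

// Load a CSV file and tell loadd to treat 'style' as categorical;
// the file and variable names are hypothetical.
housing = loadd("housing.csv", "price + sqft + cat(style)");

// One-line frequency table of the categorical variable.
frequency(housing, "style");

// Categorical regressors are dummy-coded internally and the
// category labels appear in the printed regression output.
call ols(housing, "price ~ sqft + style");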
Categorical variables play an important role in modeling, as they offer a quantitative way to include qualitative outcomes in our models. However, it is important to know how to use them appropriately and how to interpret models that include them. In this blog, you’ll learn the fundamentals you need to make the most of categorical variables.
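As a simple illustration of the interpretation issue, consider a model with one binary category entering through a dummy variable:

$$ y_i = \beta_0 + \beta_1 D_i + \varepsilon_i, $$

where $D_i = 1$ for observations in the category of interest and $D_i = 0$ for the base category. Here $\beta_0$ is the expected outcome for the base category and $\beta_1$ is the expected difference in the outcome between the two groups, so the estimated "effect" of the category is always measured relative to the chosen base case.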
Learn why TSPDLIB 2.0 is the easiest and most comprehensive time series and panel data unit root and cointegration testing package on the market. The TSPDLIB 2.0 package includes expanded functions for time series and panel data testing in the presence of structural breaks. In addition, TSPDLIB 2.0 is easier than ever to use, with newly implemented default parameter settings, updated output printing, and automatic date variable detection.
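To give a feel for the interface, here is a minimal sketch of a standard ADF unit root test with TSPDLIB. The data file and variable name are placeholders, and the three return values shown reflect our reading of the adf() interface; consult the TSPDLIB documentation for the exact signature and for the structural-break variants.

new;
library tspdlib;

// Load the series to test; the file and variable name are hypothetical.
y = loadd("ts_examples.csv", "Y");

// Deterministic component specification; the coding of constant vs.
// constant-and-trend models is described in the TSPDLIB documentation.
// With the TSPDLIB 2.0 defaults, lag length and information criterion
// settings can simply be omitted.
model = 1;

// ADF unit root test: test statistic, selected lag, and critical values.
{ tstat, lags, cv } = adf(y, model);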
In today’s blog, we look at how to save time and reduce errors using GAUSS’s new data management tools.
Using the quarterly real GDP dataset from the FRED database, we explore these tools in practice.
In particular, we examine how to:
Maximum likelihood is a fundamental workhorse for estimating model parameters, with applications ranging from simple linear regression to advanced discrete choice models. Today we learn how to perform maximum likelihood estimation with the GAUSS Maximum Likelihood MT library, using a simple linear regression example.
We’ll cover all the fundamentals you need to get started with maximum likelihood estimation in GAUSS, including:
How to create a likelihood function.
How to call the maxlikmt procedure to estimate parameters.
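To preview both steps, here is a minimal sketch under illustrative assumptions: simulated data, a hand-written linear regression log-likelihood, and a maxlikmt call that passes starting values and data directly. The variable and procedure names are ours, and structure-member details should be checked against the Maximum Likelihood MT documentation.

new;
library maxlikmt;

// Simulate illustrative data: y = 2 + 1.5*x + standard normal noise.
rndseed 723;
n = 200;
x = rndn(n, 1);
y = 2 + 1.5*x + rndn(n, 1);

// Log-likelihood for the linear regression model.
// params = intercept | slope | sigma; 'ind' indicates whether the
// function value (and analytic derivatives) are being requested.
proc (1) = linearRegressionLL(params, y, x, ind);
    local mu, sigma;

    // The modelResults structure carries the function value back to maxlikmt.
    struct modelResults mm;

    mu = params[1] + params[2] .* x;
    sigma = params[3];

    if ind[1];
        // Per-observation normal log-likelihood; a production version
        // would also constrain sigma to stay positive.
        mm.function = lnpdfn((y - mu) ./ sigma) - ln(sigma);
    endif;

    retp(mm);
endp;

// Starting values for the intercept, slope, and sigma.
b_start = { 1, 1, 1 };

// Estimate the parameters; the returned structure holds the estimation results.
struct maxlikmtResults out;
out = maxlikmt(&linearRegressionLL, b_start, y, x);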