Adding Hampel Psi Function to robustreg

I am currently working on adding the Hampel psi function to robustreg.

Pure R implementation:

psiHampel <- function(r, a, b, c) {
    psi <- numeric(length(r))
    for (i in seq_along(r)) {
        if (abs(r[i]) <= a) {
            psi[i] <- r[i]
        } else if (abs(r[i]) > a & abs(r[i]) <= b) {
            psi[i] <- a * sign(r[i])
        } else if (abs(r[i]) > b & abs(r[i]) <= c) {
            psi[i] <- (a * (c - abs(r[i])) / (c - b)) * sign(r[i])
        } else {
            psi[i] <- 0
        }
    }
    psi
}

Rcpp/C++ implementation:

#include <Rcpp.h>
#include <cmath>
using namespace Rcpp;

// [[Rcpp::export]]
NumericVector psiHampel_rcpp(NumericVector r, double a, double b, double c)
{
    int n = r.size();
    NumericVector y = clone(r);

    for (int i = 0; i < n; i++) {
        double ri = r[i];
        double ar = std::abs(ri);
        if (ar <= a) {
            y[i] = ri;
        } else if (ar > a && ar <= b) {
            y[i] = (ri > 0) ? a : -a;
        } else if (ar > b && ar <= c) {
            double v = a * (c - ar) / (c - b);
            y[i] = (ri > 0) ? v : -v;
        } else {
            y[i] = 0;
        }
    }
    return y;
}

Below is a ggplot2 graph of the Hampel psi function with a = 2, b = 4, c = 8, where x ranges from -10 to 10:
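The plotting code is not included in the post; here is a minimal sketch of how the graph and the comparison vector z might be produced, assuming z is an evenly spaced grid from -10 to 10 (the grid spacing is an assumption for illustration):

library(ggplot2)

# hypothetical input grid covering the plotted range
z <- seq(-10, 10, by = 0.01)

df <- data.frame(x = z, psi = psiHampel(z, 2, 4, 8))

ggplot(df, aes(x = x, y = psi)) +
    geom_line() +
    labs(title = "Hampel Psi Function (a = 2, b = 4, c = 8)",
         x = "x", y = "psi(x)")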

all.equal(psiHampel(z,2,4,8), psiHampel_rcpp(z,2,4,8))
[1] TRUE

Both functions produce the same results, although the C++ implementation is much faster.

R: Baseball Pitching Wins Above Replacement and Salary

Are elite baseball pitchers worth their salaries? To investigate this I fitted linear and robust linear models using the wins above replacement (WAR) metric. The universe of data is all pitchers in 2015 who started at least 20 games or, as relievers, pitched at least 70 outs, which is 298 players.

The distribution of WAR is below:

    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   -1.79  0.2125    0.95   1.219   1.905    9.28

Standard deviation: 1.61
MAD: 1.245

Distribution of Pitching WAR

Fitting a linear model to the data:

We can see that there is a linear relationship between salary and WAR, as salary is a significant predictor. The correlation between the two variables is ~0.28, but the MSE is 2.4, which indicates the model isn't the best fit. The linear model suggests that each additional $11,883,599 in player salary should buy one win over a replacement player.
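The fitting code is not shown in the post; a minimal sketch of how the linear model might be fit, assuming a hypothetical data frame pitchers with columns war and salary:

fit_lm <- lm(war ~ salary, data = pitchers)   # hypothetical data frame of 298 pitchers
summary(fit_lm)                               # salary coefficient and its significance
cor(pitchers$salary, pitchers$war)            # correlation, ~0.28 per the post
mean(residuals(fit_lm)^2)                     # MSE, ~2.4 per the post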

Investigating a robust model:

The robust model is a better fit, as the MSE is significantly lower at 1.47. The correlation between salary and WAR is slightly lower at 0.277. The advantage of a robust model is that it minimizes the influence of outliers by assigning them lower weights. The robust model suggests that each additional $14,065,063 in player salary should buy one win over a replacement player.
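The robust fit presumably uses the robustreg package described later in this post; as an illustration of the same idea (M-estimation via IRLS), here is a sketch using MASS::rlm on the same hypothetical pitchers data frame:

library(MASS)

# bisquare M-estimation via IRLS; a stand-in for the robustreg fit
fit_rob <- rlm(war ~ salary, data = pitchers, psi = psi.bisquare)
summary(fit_rob)
mean(residuals(fit_rob)^2)   # robust model MSE, ~1.47 per the post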

Below is a table of 2015 WAR, 2015 salary, and the upper prediction interval of WAR based on 2015 salary:

The top ten pitchers in 2015 all had better seasons than the upper end of the prediction interval based on their salary. 
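The table itself is not reproduced here; a hedged sketch of how such an upper prediction bound could be computed, continuing from the lm sketch above (the post does not state the interval level, so 95% is assumed):

# 95% prediction interval for WAR given each pitcher's 2015 salary
pred <- predict(fit_lm, newdata = pitchers, interval = "prediction", level = 0.95)

tab <- data.frame(war = pitchers$war,
                  salary = pitchers$salary,
                  upper_pi = pred[, "upr"])
head(tab[order(-tab$war), ], 10)   # top ten pitchers by 2015 WAR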

R Text Mining: The Wells Report

Word Cloud: Wells Report

Story: Patriots Deflated Footballs
Document to be text mined: Wells Report

How to Create Word Cloud using R:

Packages needed:
tm: Text Mining Package
RColorBrewer: ColorBrewer Palettes
wordcloud: Word Clouds

1) Convert PDF to text file using pdftotext
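One way this step could be done from within R, assuming the report is saved as wells_report.pdf and the poppler pdftotext utility is installed (the file names here are illustrative):

system("pdftotext wells_report.pdf ~/Text/wells_report.txt")   # write the text file to the corpus directory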

2) Clean the document and remove numbers, punctuation, symbols, and stop words.
 library(tm) # text mining
 source <- DirSource("~/Text") # save text file(s) here
 a <- Corpus(source, readerControl = list(reader = readPlain))
 a <- tm_map(a, content_transformer(removeNumbers))
 a <- tm_map(a, content_transformer(removePunctuation))
 a <- tm_map(a, content_transformer(tolower))
 a <- tm_map(a, stripWhitespace)
 a[[1]] <- removeWords(a[[1]], stopwords("en"))
 a <- tm_map(a, stripWhitespace)
3) Examine the corpus and replace words if necessary. Since the Wells report was written by two parties, a law firm and Exponent, some of the terms were inconsistent. This is how I changed "psig" to "psi", which were used interchangeably in the document:
 a[[1]] <- gsub("psig", "psi", a[[1]])
 a <- tm_map(a, PlainTextDocument) # necessary after word replacement
4) Create term document matrix and dataframe of keywords:
 tdm<- TermDocumentMatrix(a, control = list(minWordLength = 3))   
5) Create and format word cloud:
 library(RColorBrewer) # color palettes for the word cloud
 library(wordcloud) # word cloud plotting
 tdm.m <- as.matrix(tdm)
 tdm.v <- sort(rowSums(tdm.m), decreasing = TRUE)
 tdm.d <- data.frame(word = names(tdm.v), freq = tdm.v)
 pal2 <- brewer.pal(8, "Dark2")
 png("wells_report.png", width = 8, height = 8, units = 'in', res = 400)
 wordcloud(tdm.d$word, tdm.d$freq, scale = c(8, .2), min.freq = 5, max.words = Inf, random.order = FALSE, rot.per = .15, colors = pal2)
 dev.off()

R Regression: Comparing Speed Using lm() and Rcpp

Two common complaints about R are speed and memory usage. Below I compare three methods of performing multiple linear regression.

The built-in R function is lm(). It is the slowest. A bare-bones R implementation is lm.fit(), which is substantially faster but still slow. The fastest method is to use Rcpp and RcppArmadillo, an interface to the Armadillo C++ linear algebra library.

Using a 31,400 x 4 design matrix, a simulation is run to compare the three methods:
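The simulation code is not shown in the post; a minimal sketch of how such a benchmark might be set up (the 31,400 x 4 dimensions come from the post; the simulated data and everything else are assumptions for illustration):

set.seed(1)
n <- 31400
X <- cbind(1, matrix(rnorm(n * 3), n, 3))        # 31400 x 4 design matrix with intercept column
y <- as.vector(X %*% c(1, 2, 3, 4) + rnorm(n))

s_lm    <- replicate(1000, system.time(lm(y ~ X[, -1]))["elapsed"])   # built-in lm()
s_lmfit <- replicate(1000, system.time(lm.fit(X, y))["elapsed"])      # bare-bones lm.fit()
s_rcpp  <- replicate(1000, system.time(lm_rcpp(X, y))["elapsed"])     # Rcpp function defined below

mean(s_lm); mean(s_lmfit); mean(s_rcpp)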

A simulation of 1000 multiple linear regressions using the R function lm() provides the average system time below:

> mean(s_lm)
[1] 0.067614

A simulation of 1000 multiple linear regressions using the R function lm.fit() provides the average system time below:

> mean(s_lmfit)
[1] 0.006888

This is an improvement of nearly 10 times over lm().

A simulation of 1000 multiple linear regressions using the C++ implementation below, written with Rcpp and RcppArmadillo, provides the average system time shown after the code:
 // [[Rcpp::depends(RcppArmadillo)]]
 #include <RcppArmadillo.h>
 using namespace Rcpp;
 using namespace arma;

 // [[Rcpp::export]]
 arma::mat lm_rcpp(arma::mat X, arma::vec y) {
     // solve the normal equations for the OLS coefficient estimates
     arma::vec b_hat = (X.t() * X).i() * X.t() * y;
     return b_hat;
 }
> mean(s_rcpp)
[1] 0.002169

The Rcpp code is 30 times faster than the basic R lm() implementation!

Robust Regression Package for R

I wrote this package in 2006 when the major statistical software companies did not have a robust regression package available.  It has been downloaded over 10k times.  Using iteratively reweighted least squares (IRLS), the function calculates the optimal weights to perform M-estimation or bounded influence regression. It returns robust beta estimates and prints a robust ANOVA table using either the Huber or bisquare psi function.
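For readers unfamiliar with IRLS, here is a bare-bones sketch of the idea with Huber weights. This is not the robustreg implementation, just a minimal illustration; the tuning constant k = 1.345 and the convergence settings are common defaults assumed here:

irls_huber <- function(X, y, k = 1.345, tol = 1e-8, max_iter = 50) {
    b <- lm.fit(X, y)$coefficients                    # start from the OLS fit
    for (iter in 1:max_iter) {
        r <- as.vector(y - X %*% b)
        s <- mad(r)                                   # robust scale estimate
        w <- ifelse(abs(r / s) <= k, 1, k / abs(r / s))   # Huber weights
        b_new <- lm.fit(sqrt(w) * X, sqrt(w) * y)$coefficients   # weighted least squares step
        if (max(abs(b_new - b)) < tol) break
        b <- b_new
    }
    b_new
}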

Recent changes make the structure of the arguments similar to glm() or lm() and speed has dramatically increased.

Download Robust Regression Package for R

*Updated 8/25/2015 version  robustreg_0.1-9
*R functions median(), mad() replaced with Rcpp equivalents
*R functions Huber and bisquare replaced with Rcpp equivalents

Three Card Poker Game for R

A few years ago I created a comprehensive simulation of the card game three card poker for a client in Las Vegas.  I recently converted the simulation into a simple command line game for R.

Three Card Poker Game for R (Linux)
Three Card Poker Game for R (Windows)

UPDATED 7/16/2015

Regression / Time Series BP Stock Model

The date of the oil spill in the Gulf of Mexico was April 20, 2010.  British Petroleum stock declined 54.6% from the close of 4/20/2010 to the close of 6/25/2010, which is 48 trading days.  Volume went from 3.8 million shares on April 20th to 93 million on June 25th, with a max of 240 million on June 9th. I am going to fit a simple regression and time series model to predict BP stock price based solely on price and volume.

Below is an ANOVA table of a regression with change in price as the dependent variable and change in volume as the independent variable.  The negative coefficient for change in volume indicates that as volume increases, BP stock declines over this time period, and the effect is statistically significant.


The residuals are autocorrelated, as this is a time series.  I am going to fit an AR(2) model to the residuals of the regression.  An AR(2) turned out to be the best fit after examining various ARIMA models and GARCH / ARCH models.
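The post does not include the code for this; a hedged sketch of the two-stage fit, assuming hypothetical vectors price and volume of BP daily closing prices and volumes over the 48-day window (the variable names and data are illustrative):

d_price  <- diff(price)                       # daily change in closing price
d_volume <- diff(volume)                      # daily change in volume
fit <- lm(d_price ~ d_volume)
anova(fit)                                    # ANOVA table; d_volume is negative and significant

ar2 <- arima(residuals(fit), order = c(2, 0, 0))   # AR(2) model on the regression residuals
ar2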

To examine the fit of the model, an estimator of variance needs to be created using the actual stock prices and the fitted values.  Mean squared error (MSE) is calculated below:
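A sketch of that calculation, assuming actual_price holds the observed closing prices and fitted_price the model's fitted prices (both names are illustrative):

mse <- mean((actual_price - fitted_price)^2)   # mean squared error
mse                                            # reported as 2.8 for this model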

For this model, MSE = 2.8.

For a model that looks only at price changes and volume, this is a fairly accurate fit and predictor, although there are a few trouble points.  Accuracy could be increased by adding more predictors to the model.

Disclaimer:  Please note this is a demonstration and for academic use only. 

Benefits and Uses of Statistical Research

Identify Risk or Opportunity
Statistical research and data mining models can be used to identify specific risks or opportunities for a company. Credit card companies use data mining models to identify possible fraudulent transactions or the probability that a consumer will default on a loan or miss a payment. Statistical research can also be used to identify high quality consumers who will minimize possible borrowing risks and maximize earnings.

Market Segmentation
Statistical research can be used to identify high quality consumers who are profitable to retain and low quality consumers who are not profitable to retain. High quality consumers who are at risk of canceling a service might respond to a particular marketing strategy with a higher probability rather than an alternative strategy. Statistical research and predictive modeling techniques can be used to maximize the retention of quality consumers.

Cross Selling
Companies today collect vast amounts of data concerning their customers, including demographics, purchasing habits, and behavior.  Statistical research and data mining models can be used to identify current consumers who are likely to purchase additional products from organizations that maintain and collect elaborate data.

Identify New Markets
Statistical research can be used to identify new markets and opportunities. A properly designed survey and sampling techniques can be used to identify consumers that are likely to purchase a brand new product and whether it would be profitable for a company to bring it to market.

Minimize Variability of a Process
Many times a company may be more concerned about the variability of a response around its mean rather than the actual mean response. Statistical research can be used to ensure homogeneity of product rather than products that are manufactured with different tolerance levels.  As an example, a company manufacturing semiconductors that need to fit into another company's motherboard would want to minimize variability in dimensions and thickness to ensure the products fit and are compatible rather than focus on the mean of product dimensions. 

Efficacy of a Process or Product using Design of Experiments
Proper experimental design including randomization, replication, and blocking (if necessary) can determine if a drug, diet, exercise program, etc. is effective versus another.  Choosing the correct design before the experiment and appropriate factors and interactions to investigate is critical.  Types of designs include completely randomized designs (CRD), CRD with blocking, split plot designs, full and partial factorial designs, etc.

Best Statistical Software Package

The best statistical software package is the tool that works best for the user based upon what needs to be done, cost, and what works best with existing software. As with any tool, each package has its advantages and disadvantages. The major packages I use are R, SPSS, and SAS.

R is a free open source statistical software language that is best used for writing custom statistical programs. It has the steepest learning curve, but once you learn the language it is the most customizable and powerful of the major packages. R has a very large community of statisticians who have written custom add-on packages.

Here is an example of the power and flexibility of R.  I wrote a custom package that performs robust regression using iteratively reweighted least squares (IRLS).

The best way to run R is with Emacs, a text editor, and ESS (Emacs Speaks Statistics).

Advantages:
Fully customizable programming language
Powerful graphics
Interacts well with databases such as MySQL
Can interact with C++ to speed up processes
Runs on Windows, Linux, and Macintosh operating systems

Disadvantages:
Steepest learning curve

SPSS is a GUI-based statistical software package that is popular in the social sciences. Because it is GUI-based, it is good for running quick analyses. SPSS was also recently acquired by IBM, which should help the software in the long run.

Advantages:
GUI-based program
Quick descriptive statistics capability
Most popular package in the social sciences
Good for cluster analysis
Runs on Windows, Linux, and Macintosh operating systems

Disadvantages:
Requires annual license
Limited statistical procedures vs R or SAS
Some procedures require purchase of add-on modules

PSPP is a free open source version of SPSS which can read and manipulate SPSS data files and perform most of the procedures of SPSS.

SAS is the software used in the statistical analysis of clinical pharmaceutical trials for submission to the FDA. SAS has been in existence since 1976 and was originally developed for use on mainframe computers. Eventually as personal computers became popular and faster, SAS was developed for PCs. SAS is very popular in experimental design and ANOVA.

SAS is also a programming language and allows users to write macros which are custom data steps and procedures.

Advantages:
Standard package of the pharmaceutical industry
Programming language is flexible although not as flexible/powerful as R
Good for experimental design and ANOVA

Disadvantages:
Poor graphics capabilities although they have improved
Requires annual license
Limited statistical procedures vs R although more procedures than SPSS
Some procedures require purchase of add-on components

What is Statistical Consulting?

I am frequently asked, "What is statistical consulting?"  Statistical consulting is actually a very broad term, as is the science of statistics.  Statistics is defined as the science of making effective use of numerical data.  A statistical consultant makes use of the science of statistics to solve a real world problem while also providing qualitative consultation and technical expertise.

The process of statistical consulting varies depending on the research topic at hand.  A client needs to determine the research topic to explore.  Immediate issues to consider involve the collection of data.  Does a survey need to be created?  How is the data going to be collected?  If consumers are going to be surveyed, a properly constructed survey and appropriate randomization techniques are imperative.  How many consumers must be surveyed to obtain the desired level of power?  An example could be a consumer products company that is looking to increase sales of a product line of toothpaste.  The company wants to create a marketing campaign aimed at the groups that are most likely to purchase the product.

Once consumers are properly surveyed, the data needs to be analyzed.  A good consultant needs to be able to work with large datasets, as data storage and management is an extremely important part of the process.  After determining data methods, the consultant needs to be able to work with a statistical software package to manipulate the data, make inferences, and draw conclusions.  When using a package it is important to use the one most appropriate for the organization, as one package may be a better fit based on many factors.  After drawing conclusions, the consultant needs to be able to present the results to the company and implement them in its technology process.

Although this process is very quantitative, there is an art component as well, which comes from being able to understand the goal of the client and successfully relate the science of the results.  Cost is always an issue with a study, as is determining the power.  It may not be cost effective to survey thousands of consumers, and in situations like this the consultant needs to determine the best solution to maximize the accuracy of the study.

Another example is a utility company that needs to know how much natural gas it is going to need on a daily basis to heat homes in the wintertime. If the company has too much supply, it needs to be stored, which is an added cost. If it has too little supply, a shortage is possible. An optimal amount needs to be determined based on factors such as temperature, wind speed, and average household usage.

I've worked on many different types of projects in many different industries but specialize in building predictive models for decision making processes. A predictive model is a mathematical or statistical formula or process that takes a variety of inputs and allows an end user to make predictions about an event, product, or situation under certain assumptions. Predictive modeling allows a company to maximize or minimize a process and identify both opportunity and risk. Often, it is more important to identify risk than opportunity.