
Why Generable

Oct 27, 2017 | Eric Novik

Article Tags: Generable, Stan



The more important a decision, the more “Bayesian” it is apt to be.
— Irving J. Good

At Generable, formerly Stan Group, we are focused on productizing state-of-the-art generative models for making decisions. We are currently working on a broad class of survival (time-to-event) and econometric models that encode generative structure that cannot be learned from data alone. These models are special because they enable us to simulate counterfactual states of the world, weighted by their respective probabilities.


We started the company in 2016 to learn about the viability of Stan for industrial use and to understand how to increase the adoption of Bayesian methods in industry. More than a year later, after observing how Stan is being used inside Major League Baseball teams, a prominent Internet retailer, a publishing house, an online sportsbook, and a pharmaceutical company, we have a pretty good idea of the types of problems we are able to solve.

Stan’s Industrial Strengths

  1. Companies are using Stan to solve difficult problems where traditional machine learning (ML) tools either failed or did not produce good results. For example, a fast-growing startup, Lendable, is using Stan to price portfolios of microloans in Sub-Saharan Africa with remarkable accuracy, and Novartis is using Stan to design more effective clinical trials.

  2. Stan is great for modeling high-dimensional problems with thousands or even hundreds of thousands of parameters. Whereas big-data algorithms try to scale with the number of knowns (data), Stan scales with the number of unknowns. It’s about big models, not big data (a small sketch of such a model follows this list).

  3. Stan’s syntax allows the programmer to express models of arbitrary complexity; it is simply not realistic to expect canned algorithms to offer companies a competitive advantage. For example, pharma companies are using Stan’s differential equation solvers to fit models of drug diffusion that take the mechanism of action into account.

  4. Stan is infinitely fast, compared to the previous generation of Bayesian samplers: its HMC-NUTS algorithm produces accurate results on problems where the old samplers do not converge in finite time.

  5. We see a lot of promising applications where features derived using ML techniques are used as inputs to a Bayesian model at which point the utility of the features and the model can be evaluated. We are sceptical of using the output of classical ML algorithms directly for making decisions.
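
To make the second point concrete, here is a minimal sketch of the kind of hierarchical model we have in mind: a price-elasticity model with one parameter per book, so that the number of unknowns grows with the number of groups rather than with the number of data rows. The variable names are illustrative, and the cmdstanpy wrapper is just one convenient way to run a Stan program, not a prescription.

```python
# A minimal sketch (not a production model): one elasticity parameter per book,
# so the number of unknowns grows with the number of books J, not with the
# number of rows N. Assumes cmdstanpy and a CmdStan installation; all names
# (books, prices, quantities) are illustrative.
from cmdstanpy import CmdStanModel

stan_program = """
data {
  int<lower=1> N;                        // observations
  int<lower=1> J;                        // books; can be tens of thousands
  array[N] int<lower=1, upper=J> book;   // which book each observation belongs to
  vector[N] log_price;
  vector[N] log_qty;
}
parameters {
  real mu_beta;                          // population-level elasticity
  real<lower=0> tau;                     // between-book variation
  vector[J] beta;                        // one elasticity per book: J unknowns
  real alpha;
  real<lower=0> sigma;
}
model {
  mu_beta ~ normal(-1, 1);
  tau ~ normal(0, 1);
  sigma ~ normal(0, 1);
  beta ~ normal(mu_beta, tau);
  log_qty ~ normal(alpha + beta[book] .* log_price, sigma);
}
"""

with open("elasticity.stan", "w") as f:
    f.write(stan_program)

model = CmdStanModel(stan_file="elasticity.stan")   # compiles the program
# fit = model.sample(data=data_dict, chains=4)      # data_dict holds N, J, book, log_price, log_qty
```
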

Stan’s Industrial Weaknesses

  1. Stan is infinitely slow, compared to popular ML algorithms. Some of our models train for days. This is a limitation, but it is not a fair comparison: machine learning uses optimization, which tries to find a single best point in a multi-dimensional parameter space, while Stan and other fully Bayesian systems attempt to characterize the whole space, so we would expect them to take longer.

  2. As a corollary, Stan is not a good choice for real-time applications, or for applications where the data generating process is completely unknown or unknowable.

  3. Stan is difficult to learn for non-statisticians and non-programmers. This is a shame, since Bayesian learning, the only known system of inference that accurately quantifies all tractable sources of uncertainty, holds enormous promise for decision makers.

  4. Since Stan’s output is typically not a single prediction but rather a distribution over all the unknowns, it is not obvious to non-specialists how to use that output to generate predictions and make decisions. For example, if we are pricing books, Stan’s output will contain the joint distribution of elasticities of demand for all books in the sample. If our objective is to predict quantities sold for a new book at different prices, we have to do some additional work (sketched just after this list).

  5. It takes a lot of administrative overhead to fit lots of separate models in parallel and keep track of which data belongs to which model and which posterior. In other words, administering the modeling workflow is hard and error-prone.
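
To illustrate the fourth point, here is a small sketch of that “additional work”: it takes posterior draws of an intercept and an elasticity (simulated here, standing in for draws extracted from a fitted model), turns them into predicted quantities sold at a few candidate prices, and compares expected revenue. Expected revenue is just one possible decision criterion, and all numbers are made up.

```python
# Sketch: from posterior draws to a pricing decision. The draws below are
# simulated and stand in for draws extracted from a fitted Stan model
# (observation noise is left out to keep the example short).
import numpy as np

rng = np.random.default_rng(1)
n_draws = 4000
alpha_draws = rng.normal(8.0, 0.2, n_draws)    # log baseline demand (made up)
beta_draws = rng.normal(-1.2, 0.15, n_draws)   # price elasticity, log-log model (made up)

candidate_prices = np.array([10.0, 12.0, 15.0, 20.0])

# Quantity implied by each draw at each price: q = exp(alpha + beta * log(p))
log_q = alpha_draws[:, None] + beta_draws[:, None] * np.log(candidate_prices)[None, :]
qty = np.exp(log_q)                            # shape (n_draws, n_prices)
revenue = qty * candidate_prices               # revenue per draw and per price

for price, rev in zip(candidate_prices, revenue.T):
    lo, hi = np.percentile(rev, [5, 95])
    print(f"price {price:5.1f}: expected revenue {rev.mean():9.1f}  (90% interval {lo:.1f} to {hi:.1f})")

print("price with the highest expected revenue:",
      candidate_prices[revenue.mean(axis=0).argmax()])
```

A fuller treatment would also propagate observation noise and, for a brand-new book, the extra uncertainty coming from the hierarchical prior on elasticities.
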

Today, if you want a high-quality model for making decisions about important things like drug efficacy and safety, you have to hire skilled statisticians and engineers familiar with statistical inference, the subject matter, probabilistic programming, and communication. By productizing a class of generative models for specific uses, we envision individuals and institutions making better-calibrated decisions, faster.


Productizing models and making them ready for decision making is hard work. To support it, we are building infrastructure that makes it easier to run Stan in production. Below are the main features we are working on, and why.


Feature: Prediction API/UI framework built on top of posterior distributions.
Why do it: A model and its posterior distribution are of no value if they are not being used for decision making. We want our models to be used by lots of people. As a side effect, increased use will help improve the models.
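
As a toy illustration of what such a framework might expose (Flask, the endpoint, and the file of saved draws are all hypothetical choices, not our actual stack), a prediction endpoint can serve summaries computed directly from stored posterior draws:

```python
# Hypothetical sketch of a prediction endpoint backed by stored posterior draws.
# Assumes draws were saved to "draws.npz" with arrays "alpha" and "beta".
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
draws = np.load("draws.npz")                     # posterior draws persisted after fitting
alpha, beta = draws["alpha"], draws["beta"]

@app.route("/predict")
def predict():
    price = float(request.args.get("price", 10.0))
    qty = np.exp(alpha + beta * np.log(price))   # predictive draws for this price
    return jsonify({
        "price": price,
        "mean_qty": float(qty.mean()),
        "qty_5th": float(np.percentile(qty, 5)),
        "qty_95th": float(np.percentile(qty, 95)),
    })

if __name__ == "__main__":
    app.run()    # e.g. curl "localhost:5000/predict?price=12"
```
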


Feature: Model and function library.
Why do it: The library will contain a curated set of models and functions with known performance characteristics.


Feature: Tracking which data belongs to which model, and to which posterior.
Why do it: For regulated industries it is important to be able to reproduce the analysis and trace the provenance of the posterior distribution. It is also a good practice in general.
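
One simple way to implement this kind of tracking, sketched below with illustrative file names, is to fingerprint the exact data and model code and store those fingerprints alongside the posterior draws:

```python
# Sketch: tag a posterior with fingerprints of the data and model that produced it.
# File names and the JSON "manifest" format are illustrative.
import hashlib
import json
from datetime import datetime, timezone

def file_sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

manifest = {
    "model_sha256": file_sha256("elasticity.stan"),
    "data_sha256": file_sha256("sales_2017.csv"),
    "draws_file": "draws.npz",
    "fitted_at": datetime.now(timezone.utc).isoformat(),
}

with open("posterior_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)

# Re-running the analysis later, the hashes tell you whether the same data and
# the same model would reproduce this posterior.
```
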


Feature: Allowing the user to fit many Stan models in parallel and to scale the compute nodes as needed.
Why do it: Experienced Stan programmers routinely iterate through 20 to 50 models for each problem. We plan to accelerate that process.
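
A bare-bones sketch of the idea, with placeholder model and data files and assuming cmdstanpy, dispatches independent fits to a pool of local workers; a real system would also provision cloud nodes and collect diagnostics centrally:

```python
# Sketch: fit several independent Stan models in parallel with a process pool.
# Model and data file names are placeholders.
from concurrent.futures import ProcessPoolExecutor
from cmdstanpy import CmdStanModel

JOBS = [
    ("model_v1.stan", "data.json"),
    ("model_v2.stan", "data.json"),
    ("model_v3.stan", "data_subset.json"),
]

def fit_one(job):
    stan_file, data_file = job
    model = CmdStanModel(stan_file=stan_file)            # compile in the worker
    fit = model.sample(data=data_file, chains=4)
    fit.save_csvfiles(dir=stan_file.replace(".stan", "_draws"))
    return stan_file, fit.draws().shape                  # (draws, chains, columns)

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=3) as pool:
        for name, shape in pool.map(fit_one, JOBS):
            print(name, "finished; draws array shape:", shape)
```
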


Feature: Adaptive termination (for cause and for convenience) and soft restart.
Why do it: Some of our big models sample (train) for hours, days, and sometimes longer. It is incredibly wasteful to let them keep sampling if they have little chance of converging, or once the draws already appear statistically well behaved.
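
A crude sketch of the “stop for convenience” half of this: compute split R-hat on interim draws and stop once every monitored parameter looks well behaved. The threshold, the monitoring interface, and the fake draws are all illustrative.

```python
# Sketch: a "stop for convenience" rule based on split R-hat over interim draws.
import numpy as np

def split_rhat(chains):
    """chains: array of shape (n_chains, n_draws) for a single parameter."""
    n_chains, n_draws = chains.shape
    half = n_draws // 2
    splits = np.vstack([chains[:, :half], chains[:, half:2 * half]])  # (2*n_chains, half)
    n = splits.shape[1]
    chain_means = splits.mean(axis=1)
    W = splits.var(axis=1, ddof=1).mean()        # within-chain variance
    B = n * chain_means.var(ddof=1)              # between-chain variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

def should_stop(interim_draws, threshold=1.01):
    """interim_draws: dict of parameter name -> (n_chains, n_draws) array."""
    rhats = {name: split_rhat(ch) for name, ch in interim_draws.items()}
    return all(r < threshold for r in rhats.values()), rhats

# Fake interim draws from 4 chains, standing in for a running sampler's output:
rng = np.random.default_rng(0)
interim = {"beta": rng.normal(0, 1, size=(4, 500)),
           "sigma": rng.normal(1, 0.1, size=(4, 500))}
stop, rhats = should_stop(interim)
print("stop sampling:", stop, rhats)
```

The “for cause” half would work in reverse: if, after a long interim window, the chains show no sign of mixing, it is cheaper to stop, rethink the model or its parameterization, and restart.
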


Feature: Streaming posterior draws to a high-performance database.
Why do it: This is related to the provenance question, but here we mostly want high performance for downstream applications. Training models takes time, but predictions must be fast.
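
As a toy stand-in for such a database (SQLite below is only for illustration, and the schema is made up), draws can be appended as they are produced and indexed so that downstream prediction code reads only what it needs:

```python
# Sketch: append posterior draws to a database table as they are produced.
import sqlite3
import numpy as np

conn = sqlite3.connect("posteriors.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS draws (
        model_id TEXT,
        chain    INTEGER,
        draw     INTEGER,
        param    TEXT,
        value    REAL
    )
""")
conn.execute("CREATE INDEX IF NOT EXISTS idx_model_param ON draws (model_id, param)")

def stream_draws(model_id, chain, param, values):
    rows = [(model_id, chain, i, param, float(v)) for i, v in enumerate(values)]
    conn.executemany("INSERT INTO draws VALUES (?, ?, ?, ?, ?)", rows)
    conn.commit()

# Fake draws standing in for output from a running sampler:
stream_draws("elasticity-v1", chain=1, param="beta[1]",
             values=np.random.normal(-1.2, 0.1, 100))

# Downstream: a fast read of just the draws a prediction needs.
rows = conn.execute(
    "SELECT value FROM draws WHERE model_id = ? AND param = ?",
    ("elasticity-v1", "beta[1]"),
).fetchall()
print(len(rows), "draws loaded")
```
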


We are currently running a few pilots with beta customers. If you would like to join the beta program, please fill out the [intake form](http://www.generable.com/contact/).
