Bayesian Scientific Computing: from PK to Patient Utilities

Started by core developers of the Stan language, Generable is the leading provider of Stan-based model development, optimization, and training, as well as a HIPAA-compliant computational cloud called BiOS.
How predictive is my new biomarker? What attributes differentiate responders from non-responders? How fast will a patient progress on or off treatment? What is the probability that my new treatment is better than the competitor or standard of care? What is the best dose for a specific treatment or a specific person? These are some of the questions we are working on with leading Pharma and Biotech companies.

How it works

Our research focuses on connecting low-level measurements, such as drug concentrations, tumor size, and ctDNA, to clinically meaningful outcomes, including progression-free and overall survival (PFS/OS) and various types of adverse events.

What makes us different?

1. Explainability and Transparency

Modern AI systems often lack explainability and transparency. In contrast, our models are defined explicitly through code rather than learned from data, making them inherently transparent and explainable. If there's ever a question about why a particular prediction was made, we can directly trace it back to the specific part of the model responsible for that output.
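To make "defined explicitly through code" concrete, here is a minimal sketch of what such a transparent model component can look like: a one-compartment PK model with first-order absorption, written as plain code with hypothetical parameter values. Every term in the prediction has a physical interpretation and can be inspected directly; this is an illustrative toy, not one of Generable's production models.

```python
import math

def concentration(t, dose=100.0, ka=1.0, ke=0.1, V=30.0):
    """Drug concentration at time t (hours) after an oral dose.

    One-compartment model with first-order absorption. Each parameter
    is mechanistically meaningful: ka = absorption rate (1/h),
    ke = elimination rate (1/h), V = volume of distribution (L).
    All values here are hypothetical.
    """
    scale = (dose * ka) / (V * (ka - ke))
    return scale * (math.exp(-ke * t) - math.exp(-ka * t))

# Any prediction traces directly back to these named terms:
print(f"C(2h) = {concentration(2.0):.2f} mg/L")
```

Because the model is a short, explicit function rather than a learned black box, a surprising prediction can be diagnosed by reading the code and checking each parameter against its physical meaning.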
2. Algorithms

Bayesian inference has long been appealing but was historically limited by heavy computational demands and poor scalability. However, recent advances in computational statistics — such as dynamic Hamiltonian Monte Carlo (HMC) with the No-U-Turn Sampler (NUTS) and Pathfinder Variational Inference — enable us to fit models with hundreds of thousands of parameters. Our team’s expertise further helps address some of the limitations of these methods, especially in complex hierarchical models.
3. Calibration and Predictive Accuracy

Bayesian methods support well-calibrated predictions by treating uncertainty as a core component of inference. Instead of producing single-point estimates, they generate full probability distributions over parameters and outcomes. Coupled with probabilistic programming languages like Stan, our models tend to produce more accurate predictions in and out of sample, while preserving calibration.
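The difference between a point estimate and a full posterior can be shown in a few lines. The sketch below uses a hypothetical trial (7 responders out of 20 patients) and a conjugate Beta-Binomial model; the numbers and the flat Beta(1, 1) prior are illustrative assumptions, not data from any real study.

```python
import numpy as np

# Hypothetical trial: 7 responders out of 20 patients.
responders, n = 7, 20

# A point estimate collapses all uncertainty into one number:
point = responders / n  # 0.35

# The Bayesian analysis keeps the full posterior. With a Beta(1, 1) prior,
# the posterior response rate is exactly Beta(1 + 7, 1 + 13) by conjugacy;
# drawing from it lets us summarize any quantity of interest.
rng = np.random.default_rng(42)
posterior = rng.beta(1 + responders, 1 + n - responders, size=100_000)

lo, hi = np.quantile(posterior, [0.05, 0.95])
prob_above_025 = (posterior > 0.25).mean()
print(f"point estimate: {point:.2f}")
print(f"90% credible interval: ({lo:.2f}, {hi:.2f})")
print(f"P(response rate > 25%): {prob_above_025:.2f}")
```

The posterior directly answers decision-relevant questions like "what is the probability the response rate exceeds 25%?", which a single point estimate cannot, and its intervals are calibrated by construction when the model is adequate.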
4. Small Data

Big data is all the rage, but what do you do when you only have a few patients and few observations? Our models work well in the small data regime, such as early clinical trials and rare diseases, where we need to take advantage of prior knowledge from previous studies, scientific publications, and other external data sources.
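A toy example of how prior knowledge stabilizes inference in the small-data regime: a conjugate normal-normal update with four hypothetical patients and an informative prior standing in for evidence from earlier studies. All numbers are made up for illustration.

```python
import numpy as np

# Hypothetical new trial: treatment effects from only 4 patients.
y = np.array([1.8, 0.2, 2.9, 1.1])
sigma = 1.5                 # assumed known observation sd

# Informative prior encoding (hypothetical) earlier studies: Normal(1.0, 0.5)
mu0, tau0 = 1.0, 0.5

# Conjugate normal-normal update: precisions (inverse variances) add.
prec = 1 / tau0**2 + len(y) / sigma**2
post_mean = (mu0 / tau0**2 + y.sum() / sigma**2) / prec
post_sd = prec**-0.5

print(f"sample mean: {y.mean():.2f}")           # noisy with n = 4
print(f"posterior:   {post_mean:.2f} +/- {post_sd:.2f}")
```

The posterior mean is pulled from the noisy sample mean (1.50) toward the prior (1.00), and the posterior is tighter than either source alone; as more patients accrue, the data term dominates and the prior's influence fades automatically.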
