## Notes for Generative AI learning, part 2

This post is for chapter 5 (Autoregressive Models) and chapter 6 (Normalizing Flow Models).

Scratch notes from reading the O’Reilly book ‘Generative Deep Learning, 2nd Edition’. This post covers the first 4 chapters.

Learning notes from the Coursera course on Large Language Models (LLMs)

A collection of notes on Reinforcement Learning, as I am going through the Coursera specialization: Fundamentals of Reinforcement Learning. Hopefully this w...

A series of study notes on optimization, including convex optimization, linear programming, the simplex method, and KKT conditions.

Summary of Coursera MLOps Course 1.

Here we look into a good resource for practicing good machine learning design patterns.

The multi-armed bandit is such a classic problem; here, let’s implement some simple policies from the ground up to address it.
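As a taste of what “from the ground up” means here, below is a minimal epsilon-greedy sketch in Python. The bandit setup (Gaussian rewards), the function name, and the parameter choices are illustrative assumptions, not taken from the post itself:

```python
import random

def epsilon_greedy_bandit(true_means, epsilon=0.1, steps=1000, seed=0):
    """Epsilon-greedy policy on a bandit with Gaussian rewards."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms       # how often each arm was pulled
    estimates = [0.0] * n_arms  # running estimate of each arm's mean reward
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)  # explore: random arm
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        reward = true_means[arm] + rng.gauss(0, 1)  # noisy reward
        counts[arm] += 1
        # incremental update of the sample mean for this arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = epsilon_greedy_bandit([0.1, 0.5, 0.9])
```

The incremental-mean update avoids storing the full reward history, which is the same streaming trick used throughout the bandit literature.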

Egg drop soups are delicious, dropping eggs can also be fun.

How to maximize your chance of finding the best candidate, apartment, or even soulmate, if only the world can be modeled simply.

How can one update the mean, variance, and median of a long list of numbers, after a new element is added?
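One classic answer, sketched below in Python: Welford’s algorithm for the streaming mean and variance, plus a two-heap scheme for the median. The class and method names are my own illustration, not from the post:

```python
import heapq

class RunningStats:
    """Online mean/variance (Welford) and median (two heaps) in O(log n) per add."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations
        self.lo = []    # max-heap (stored negated) of the lower half
        self.hi = []    # min-heap of the upper half

    def add(self, x):
        # Welford update for mean and variance
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        # two-heap update for the median
        if not self.lo or x <= -self.lo[0]:
            heapq.heappush(self.lo, -x)
        else:
            heapq.heappush(self.hi, x)
        if len(self.lo) > len(self.hi) + 1:
            heapq.heappush(self.hi, -heapq.heappop(self.lo))
        elif len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    @property
    def variance(self):
        # population variance; use (n - 1) for the sample version
        return self.m2 / self.n if self.n else 0.0

    @property
    def median(self):
        if self.n == 0:
            raise ValueError("no data yet")
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2
```

The mean and variance need only O(1) state; the median is the harder part, which is why it gets the two heaps.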

How can one relate a seemingly obvious physics concept to linear algebra, and then to a data science problem?

For classification problems, sometimes we care about the narrative of the predicted scores more than the predicted class. But the predicted scores cannot al...

Gradient descent is one of the most important tools in machine learning, but how hard can it be?
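For the flavor of it, here is a minimal sketch of plain gradient descent in Python, minimizing a toy quadratic. The function name, learning rate, and example objective are illustrative assumptions, not from the post:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

For this convex quadratic the iterates contract toward the minimizer at x = 3; the "how hard can it be" part starts once the objective is non-convex or the learning rate is poorly chosen.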

Why should I trust your black-box model? How about breaking open the black box, at least locally.

Fitting one single Gaussian distribution is trivial, but how about more than one?

k-nearest-neighbor search can be time-consuming with brute force; how can we do better?

Now we know what a good causal model looks like; the next question is, how do we build one?

We all know that correlation does not imply causation. While we can observe correlations, how can we go about studying causation?

How similar is my list of the greatest movies of all time to yours?

With the COVID-19 pandemic disrupting everyday life across the globe, a phrase we hear a lot is to “flatten the curve”. What is the science behind that?

How to speed up the time between model development and model deployment.

Two diametrically different topics, but equally enlightening books.

Statistics, how to build intuition to see everything from a different angle.

Self-improvement, in different aspects.

Math, debt, and Amazon.

How to lie, to get rich, with statistics, and to play with people’s minds.

Armed with seemingly omnipotent technical marvels that are destined to solve all the problems should they arise, there are still questions we need to think c...

Pain is inevitable. Suffering is optional.

A quick primer on how to leverage the rich data from the US Census Bureau.

Recap of an excellent 3-day workshop on Docker and Kubernetes.

There is no built-in pivot function in Hive, but one can still do it with relative ease.

A runner’s view on this database concept.

Hello World, again!

One of the most well-known effects of quantum mechanics is the uncertainty principle. A direct consequence is that it imposes a fundamental limit on the...

We are all quite familiar with a simple harmonic oscillator, such as a frictionless pendulum, or a mass-on-a-spring system. Since there is no friction (or en...

What a poetic opening to your talk! This is what Sir John Pendry did when he gave a talk at the IIN Symposium a few months back. He then went on to describe light...

We are often spoiled by the amount of data at our disposal, but it is still easy to make mistakes in statistical inference if we exploit that data too much.