San Francisco, California, United States
10K followers
500+ connections
About
Activity
-
ParadeDB is looking to hire someone with experience building columnar query processing, like DataFusion/Apache Spark/Redshift/DuckDB/Snowflake/etc…
Liked by Holden Karau
-
Who's building Open Science / Open Source stuff for Cancer and Rare Disease? Specifically tools, platforms and infrastructure that patients can use?…
Liked by Holden Karau
-
Can DuckDB provide a way to access your data cross clouds and data sources? YES! The power of VIEWs in DuckDB makes it a portable catalog!
Liked by Holden Karau
Experience & Education
Publications
-
Scaling Python with Dask: From Data Science to Machine Learning
O'Reilly
Modern systems contain multi-core CPUs and GPUs that have the potential for parallel computing. But many scientific Python tools were not designed to leverage this parallelism. With this short but thorough resource, data scientists and Python programmers will learn how the Dask open source library for parallel computing provides APIs that make it easy to parallelize PyData libraries including NumPy, pandas, and scikit-learn.
Authors Holden Karau and Mika Kimmins show you how to use Dask computations in local systems and then scale to the cloud for heavier workloads. This practical book explains why Dask is popular among industry experts and academics and is used by organizations that include Walmart, Capital One, Harvard Medical School, and NASA.
With this book, you'll learn:
What Dask is, where you can use it, and how it compares with other tools
How to use Dask for batch data parallel processing
Key distributed system concepts for working with Dask
Methods for using Dask with higher-level APIs and building blocks
How to work with integrated libraries such as scikit-learn, pandas, and PyTorch
How to use Dask with GPUs
-
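The batch data-parallel style the book teaches — split a dataset into partitions, process each partition concurrently, then combine partial results — can be sketched with nothing but the standard library. This is an illustrative stand-in for the pattern Dask automates, not Dask's actual API; the function names here (`partition`, `chunk_mean`, `parallel_mean`) are invented for the example.

```python
# Sketch of chunked, data-parallel computation in the spirit of Dask:
# partition the data, map over partitions in parallel, reduce the partials.
# (Stdlib-only illustration; Dask does this scheduling for you.)
from concurrent.futures import ThreadPoolExecutor

def partition(data, n_chunks):
    """Split `data` into roughly equal chunks."""
    size = (len(data) + n_chunks - 1) // n_chunks
    return [data[i:i + size] for i in range(0, len(data), size)]

def chunk_mean(chunk):
    # Each worker returns a partial (sum, count) pair so means combine exactly.
    return sum(chunk), len(chunk)

def parallel_mean(data, n_chunks=4):
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        partials = list(pool.map(chunk_mean, partition(data, n_chunks)))
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / count

print(parallel_mean(list(range(1, 101))))  # 50.5
```

Note that each chunk emits `(sum, count)` rather than a mean: partial means cannot be averaged directly, which is exactly the kind of combine-step reasoning distributed frameworks make explicit.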
Scaling Python with Ray
O'Reilly
Serverless computing enables developers to concentrate solely on their applications rather than worry about where they've been deployed. With the Ray general-purpose serverless implementation in Python, programmers and data scientists can hide servers, implement stateful applications, support direct communication between tasks, and access hardware accelerators.
In this book, experienced software architecture practitioners Holden Karau and Boris Lublinsky show you how to scale existing Python applications and pipelines, allowing you to stay in the Python ecosystem while reducing single points of failure and manual scheduling. Scaling Python with Ray is ideal for software architects and developers eager to explore successful case studies and learn more about decision and measurement effectiveness.
If your data processing or server application has grown beyond what a single computer can handle, this book is for you. You'll explore distributed processing (the pure Python implementation of serverless) and learn how to:
Implement stateful applications with Ray actors
Build workflow management in Ray
Use Ray as a unified system for batch and stream processing
Apply advanced data processing with Ray
Build microservices with Ray
Implement reliable Ray applications
-
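The "stateful application" item above centers on the actor pattern: one object owns mutable state, and all interaction goes through serialized method calls. A minimal stdlib sketch of that idea, assuming a lock as a stand-in for an actor's one-message-at-a-time mailbox (this is not Ray's actual API; `CounterActor` is a hypothetical name):

```python
# Sketch of the stateful-actor pattern Ray actors provide: state lives in
# one object, and a lock serializes access the way an actor mailbox would.
import threading

class CounterActor:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self._value += 1
            return self._value

counter = CounterActor()
threads = [threading.Thread(target=counter.increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.increment() - 1)  # 10: all ten concurrent increments applied
```

In Ray the same shape appears as a decorated class whose methods are invoked remotely; the serialization of calls is what lets concurrent callers share state safely.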
Kubeflow for Machine Learning
O'Reilly
If you're training a machine learning model but aren't sure how to put it into production, this book will get you there. Kubeflow provides a collection of cloud native tools for different stages of a model's lifecycle, from data exploration, feature preparation, and model training to model serving. This guide helps data scientists build production-grade machine learning implementations with Kubeflow and shows data engineers how to make models scalable and reliable.
-
High Performance Spark
O'Reilly
Apache Spark is amazing when everything clicks. But if you haven’t seen the performance improvements you expected, or still don’t feel confident enough to use Spark in production, this practical book is for you. Authors Holden Karau and Rachel Warren demonstrate performance optimizations to help your Spark queries run faster and handle larger data sizes, while using fewer resources.
Ideal for software engineers, data engineers, developers, and system administrators working with large-scale data applications, this book describes techniques that can reduce data infrastructure costs and developer hours. Not only will you gain a more comprehensive understanding of Spark, you’ll also learn how to make it sing.
With this book, you’ll explore:
How Spark SQL’s new interfaces improve performance over SQL’s RDD data structure
The choice between data joins in Core Spark and Spark SQL
Techniques for getting the most out of standard RDD transformations
How to work around performance issues in Spark’s key/value pair paradigm
Writing high-performance Spark code without Scala or the JVM
How to test for functionality and performance when applying suggested improvements
Using Spark MLlib and Spark ML machine learning libraries
Spark’s Streaming components and external community packages
-
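The key/value pair paradigm mentioned above revolves around per-key aggregation, Spark's `reduceByKey`. A plain-Python sketch of the semantics (not Spark code; `reduce_by_key` is an invented helper for illustration):

```python
# Semantics of per-key aggregation a la Spark's reduceByKey: group values
# by key, then fold each group with a combining function.
from collections import defaultdict
from functools import reduce

def reduce_by_key(pairs, fn):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return {key: reduce(fn, values) for key, values in grouped.items()}

sales = [("apples", 3), ("pears", 2), ("apples", 4)]
print(reduce_by_key(sales, lambda a, b: a + b))  # {'apples': 7, 'pears': 2}
```

In real Spark the performance concern the book addresses is that this grouping triggers a shuffle; combining values map-side before the shuffle is one of the optimizations it covers.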
Learning Spark
O'Reilly
The Web is getting faster, and the data it delivers is getting bigger. How can you handle everything efficiently? This book introduces Spark, an open source cluster computing system that makes data analytics fast to run and fast to write. You’ll learn how to run programs faster, using primitives for in-memory cluster computing. With Spark, your job can load data into memory and query it repeatedly much quicker than with disk-based systems like Hadoop MapReduce.
Written by the developers…The Web is getting faster, and the data it delivers is getting bigger. How can you handle everything efficiently? This book introduces Spark, an open source cluster computing system that makes data analytics fast to run and fast to write. You’ll learn how to run programs faster, using primitives for in-memory cluster computing. With Spark, your job can load data into memory and query it repeatedly much quicker than with disk-based systems like Hadoop MapReduce.
Written by the developers of Spark, this book will have you up and running in no time. You’ll learn how to express MapReduce jobs with just a few simple lines of Spark code, instead of spending extra time and effort working with Hadoop’s raw Java API.
Quickly dive into Spark capabilities such as collect, count, reduce, and save
Use one programming paradigm instead of mixing and matching tools such as Hive, Hadoop, Mahout, and S4/Storm
Learn how to run interactive, iterative, and incremental analyses
Integrate with Scala to manipulate distributed datasets like local collections
Tackle partitioning issues, data locality, default hash partitioning, user-defined partitioners, and custom serialization
Use other languages by means of pipe() to achieve the equivalent of Hadoop streaming
-
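The primitives listed above (map, filter, reduce, count, collect) can be modeled on a plain list to show the chaining style the book teaches. This `MiniRDD` class is a hypothetical illustration of the API shape, not Spark's implementation:

```python
# Toy model of RDD-style chained transformations over a local list.
# Real RDDs are partitioned, distributed, and lazily evaluated.
from functools import reduce as _reduce

class MiniRDD:
    def __init__(self, data):
        self._data = list(data)

    def map(self, fn):
        return MiniRDD(fn(x) for x in self._data)

    def filter(self, fn):
        return MiniRDD(x for x in self._data if fn(x))

    def reduce(self, fn):
        return _reduce(fn, self._data)

    def count(self):
        return len(self._data)

    def collect(self):
        return list(self._data)

rdd = MiniRDD(range(10)).map(lambda x: x * x).filter(lambda x: x % 2 == 0)
print(rdd.collect())                   # [0, 4, 16, 36, 64]
print(rdd.reduce(lambda a, b: a + b))  # 120
```

The key difference from this toy: Spark transformations like `map` and `filter` are lazy and only execute when an action (`collect`, `count`, `reduce`) is called.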
Fast Data Processing With Spark
Packt
Spark is a framework for writing fast, distributed programs. Spark solves similar problems as Hadoop MapReduce does but with a fast in-memory approach and a clean functional style API. With its ability to integrate with Hadoop and inbuilt tools for interactive query analysis (Shark), large-scale graph processing and analysis (Bagel), and real-time analysis (Spark Streaming), it can be interactively used to quickly process and query big data sets.
Fast Data Processing With Spark covers how to write distributed map reduce style programs with Spark. The book will guide you through every step required to write effective distributed programs from setting up your cluster and interactively exploring the API, to deploying your job to the cluster, and tuning it for your purposes.
Fast Data Processing With Spark covers everything from setting up your Spark cluster in a variety of situations (stand-alone, EC2, and so on), to how to use the interactive shell to write distributed code interactively. From there, we move on to cover how to write and deploy distributed jobs in Java, Scala, and Python.
We then examine how to use the interactive shell to quickly prototype distributed programs and explore the Spark API. We also look at how to use Hive with Spark to use a SQL-like query syntax with Shark, as well as manipulating resilient distributed datasets (RDDs).
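The canonical "distributed map reduce style" program such a book builds toward is word count: the map phase emits one unit per word, the reduce phase sums per word. A single-machine sketch of the pattern (not Spark code):

```python
# Word count, the hello-world of map reduce: map lines to (word, 1) units,
# reduce by summing per word. Counter plays the role of the reduce phase.
from collections import Counter

def word_count(lines):
    counts = Counter()
    for line in lines:
        counts.update(line.lower().split())
    return dict(counts)

print(word_count(["to be or not to be"]))  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```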
Courses
-
Compilers
CS444
-
Real Time Operating Systems
CS452
Projects
-
Spark Testing Base
You've written an awesome program in Spark and now it's time to write some tests. Only you find yourself writing the code to set up and tear down local-mode Spark between each suite, and you say to yourself: This is not my beautiful code.
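The pattern spark-testing-base packages up is hoisting expensive shared setup/teardown into a reusable base class so individual suites stop repeating it. A sketch of that shape, with a hypothetical `FakeSession` standing in for a real local-mode Spark session (names invented for illustration):

```python
# Shared-fixture pattern: build the expensive resource once per suite in a
# base class, so test suites inherit setup/teardown instead of copying it.
import unittest

class FakeSession:
    """Stand-in for an expensive shared resource such as local-mode Spark."""
    def __init__(self):
        self.stopped = False
    def parallelize(self, data):
        return list(data)
    def stop(self):
        self.stopped = True

class SharedSessionTestBase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.session = FakeSession()   # built once per suite, not per test

    @classmethod
    def tearDownClass(cls):
        cls.session.stop()

class MyJobTest(SharedSessionTestBase):
    def test_doubles(self):
        data = self.session.parallelize([1, 2, 3])
        self.assertEqual([x * 2 for x in data], [2, 4, 6])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(MyJobTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

With the real library, the base class also handles details like disabling the Spark UI and reusing a context across suites, which is exactly the boilerplate the project exists to remove.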
-
Sparkling Pandas
-
SparklingPandas aims to make it easy to use the distributed computing power of PySpark to scale your data analysis with Pandas.
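Scaling a pandas-style computation means the per-partition aggregates must merge into the same answer a single machine would give. A stdlib sketch of a distributed groupby-mean, assuming invented helper names (`partial_group_sum`, `merge_group_means`); real SparklingPandas delegates the partitioning to PySpark:

```python
# Distributed groupby-mean: aggregate (sum, count) per key on each
# partition, then merge partials so the final means are exact.
from collections import defaultdict

def partial_group_sum(rows):
    """Per-partition aggregation: key -> [sum, count]."""
    out = defaultdict(lambda: [0, 0])
    for key, value in rows:
        out[key][0] += value
        out[key][1] += 1
    return out

def merge_group_means(partials):
    merged = defaultdict(lambda: [0, 0])
    for part in partials:
        for key, (s, c) in part.items():
            merged[key][0] += s
            merged[key][1] += c
    return {key: s / c for key, (s, c) in merged.items()}

partitions = [[("a", 1), ("b", 4)], [("a", 3)]]
print(merge_group_means(map(partial_group_sum, partitions)))  # {'a': 2.0, 'b': 4.0}
```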
-
Fast Data Processing with Spark
-
Fast Data Processing with Spark covers how to write distributed map reduce style programs with Spark. The book will guide you through every step required to write effective distributed programs from setting up your cluster and interactively exploring the API, to deploying your job to the cluster, and tuning it for your purposes.
More activity by Holden
-
It's not easy to find time for learning -- you have to proactively carve it out. Sol's advice is right on. If you can do so as a team, even better --…
Liked by Holden Karau
-
I feel incredibly privileged to have been brought onto the team at Writer this year. The passion and intelligence of my colleagues has forced me to…
Liked by Holden Karau
-
Join us for Day 2 of #NuxtNation24 with Vue School
Liked by Holden Karau
-
I'm pitching this week at @NextUp/Ne互tUp and you're invited! We'll be discussing #ResponsibleAI, #AIBias, #AIEthics and many more topics in #Data and…
Liked by Holden Karau
-
It's 2024. LinkedIn is overflowing with thought leaders proclaiming that AI will only continue to get better and cheaper, if we just wait 6 months.…
Liked by Holden Karau
-
As #GenAI projects move from POC to production, Databricks customers are shifting from deploying single models to leveraging AI agent systems. Naveen…
Liked by Holden Karau
-
Every year I promise to myself not to travel to that overcrowded event again. Every year there's someone to make me break that promise. Every year…
Liked by Holden Karau