“Alex is one of the smartest people I know. Not only is he an excellent developer, he is also a pleasure to be around. I can't remember any bad experiences while working with this A+ gentleman! If I have to recommend any developer, Alex Castillo is always the first person that comes to mind.”
Los Angeles Metropolitan Area
5K followers
500+ connections
Experience & Education
- Neurosity
Organizations
- Quake Capital, Accelerator Program (Present): Quake Capital focuses on early-stage companies and takes a founder-friendly approach.
- Futureworks, Entrepreneurs Program (Present): The Futureworks Incubator accelerates, champions, and supports the growth of hardware startups and advanced manufacturing entrepreneurs in New York City.
- Google Developers, Google Developer Expert for Web Technologies (Present): Google Developer Experts (GDEs) are recognized for their exemplary work by Google and invited to be part of the growing GDE community. GDEs are gurus, mentors, and friends.
Recommendations received
3 people have recommended Alex
Other similar profiles
- Devin Finzer (Miami Beach, FL)
- Sam Street (San Francisco, CA)
- Zaheer Mohiuddin (Saratoga, CA)
- Yai Bolanos, Co-Founder at ADAVED (Miami, FL)
- Siraj Raval (San Francisco, CA)
- Yuping Wang (Greater Boston)
- Vivek Raghunathan (Palo Alto, CA)
- Saqib Awan (San Francisco Bay Area)
- Kevin M. (London)
- Pawel Tulin, Co-Founder at DisruptiveExperience (New York, NY)
- samy kamkar (Los Angeles, CA)
- Chris Whitman (Los Angeles, CA)
- Cem Hurturk (United States)
- Claudio Ceballos Paz (San Francisco Bay Area)
- Wade Foster (Sunnyvale, CA)
- Michael Lenny (San Diego, CA)
- Alexey Malashkevich (Miami-Fort Lauderdale Area)
- Mike Ross, Writer (Miami, FL)
- Edwin S. (San Francisco Bay Area)
- Steve Bartel, Founder & CEO of Gem ($150M Accel, Greylock, ICONIQ, Sapphire, Meritech, YC) | Author of startuphiring101.com (San Francisco, CA)
Explore more posts
- Shreyas Gite
This is f*cking hilarious: TuSimple's pivot from self-driving trucks to AI animation and gaming! 🤖🎮
Right now, everyone’s betting on humanoid and general-purpose robotics startups. But after 5 years in self-driving tech and building Kopernikus Auto, here’s the reality: most humanoid robotics companies will be dead. In the self-driving world, we’ve only seen a few real winners: Tesla, Waymo, and Comma.ai. Why?
- Humanoid robotics isn’t a hardware problem; it’s a software problem.
- You either have infinite data, cash, talent, or a clear use case, or you won’t make it, no matter how much you raise.
Of the 100+ companies in self-driving tech, almost all are dead or miles away from profitability.
✨ Don’t get distracted by the appeal of humanoids. Focus on real problems you can actually solve.
What are you focusing on? 👇
- Annamalai Kathirkamanathan
"Software Ate The World, Now AI Is..." - Jensen Huang Here are 12 counter-intuitive advice for building AI products: (Coming from great experts) ✨ 👉 Think Differently - Start by understanding the AI tech’s potential. - Understand it before integrating it into products. - Focus on hard problems that models can’t solve easily. 👉 Solve Real Problems - Building a cool AI demo ≠ User value. - Segment users based on their attitudes toward AI. - Don't obsess on model quality; it's just a tiny piece. 👉 UI/UX Is Your Moat - Adding AI will not solve the problem. - Also, users will not engage just because its AI. - Find the right UI/UX and educating users is crucial. - AI sometimes can get confusing so guide your users. 👉 Proprietary Data Is Key - Models are becoming commoditized. - Exclusive data = Superior AI product. - Data and the interface >> models themselves. 👉 Initial Workflow Matters - Choose workflow that provides high user value. - Choose one with high promise-to-payoff ratio. 👉 AI Label Boosts Engagement - Just calling your product "AI" will give you a boost. - "AI-powered feature" >> "feature". Disclaimer: AI should align with your feature capabilities. 👉 Small Improvements Matter ❌ Don't focus on "What cool new things could AI do" ✅ Focus on "What’s the thing user do 100 times a day". 👉 Plan for Imperfection - Consistency >> Perfection. - It's okay if your AI feature has a low acceptance rate. - GitHub Copilot acceptance rate is 35%. - Ask your customers: "Is this making their job easier?". 👉 Evolving Business Model - AI products auto-improves when the model improves. - So, adapt pricing model for continuous AI improvement. 👉 Scalability Bottlenecks - Plan for scalability from the beginning. - Training models directly improve quality. 👉 Monitor Engagement - Monitor how AI features affect user behavior. - PMF for AI: increase productivity & give time back. - i.e. a great AI feature will reduce time spent in-app. 👉 Speed is Crucial - In the AI era, speed wins. - Pre-compute so the results are instantaneous. - Speed in deploying AI is a competitive advantage. Thank you for your amazing insights - 📧 Elad Gilo Paul Adams Sarah Guo Joshua X. Hilary Gridley Caitlin Colgrove Ryan J. Salva Joel Kwartler Cameron Adams Scott Belsky Paige Costello James Evans Claire Vo Stephen Whitworth Johnny Ho Sherif Mansour Dan Siroker Henri Liriani Chris Lu Gaurav Misra Rahul Vohra A big shout-out to Kyle Poyar for putting it all together in an article! --- Full article in the comments below 👇 Let me know in the comments—which one did you need to hear today? #ai #startup #product #startups #genai
- Lydia Choy
Also, we released a new version of the public beta, which is available in Creative Cloud in the "Beta Apps" section under Applications. No new features this time--we are still iterating on the Assets Library Panel and "Search by Shape" feature, as well as the "Smart Upres" feature. If you haven't tried either of these, please do and let us know what you think! Here is how you can access the beta: https://lnkd.in/gq3uqEdm
- Ramnivas Laddad
I have been thinking about "Why, after 6 years, I'm over GraphQL" (https://lnkd.in/g_XDCvbQ). I faced many of the same issues with my GraphQL implementations but came to a different conclusion that led to Exograph (https://exograph.dev/).
❗ Problem: GraphQL authorization is a nightmare to implement.
✅ Solution: A declarative way to define entity- or field-level authorization rules.
In Exograph, you define authorization rules alongside your data definitions. This not only makes it easy to audit authz rules, but lets you lean on the engine to enforce them.
❗ Problem: GraphQL queries can ask for multiple resources in a single query, overwhelming the backend.
✅ Solution: Trusted documents, query depth limiting, and rate limiting.
Exograph supports trusted documents/persisted queries and other mechanisms to control the load on your backend.
❗ Problem: GraphQL queries can overwhelm the parser.
✅ Solution: The same as above. You can control the load on your parser by using trusted documents and limiting query depth.
❗ Problem: GraphQL queries can lead to the N+1 problem.
✅ Solution: Defer data fetching to a query planner.
The classic and dreaded N+1! Exograph uses a query planner to ensure that you never run into this.
❗ Problem: Balancing authorization and the N+1 problem can be tricky.
✅ Solution: Make authorization rules part of the query planning process.
By making authorization rules part of the query planning process, Exograph ensures that you never fetch data the user is not authorized to see.
❗ Problem: GraphQL makes it quite hard to keep code modular.
✅ Solution: Reduce the amount of code by leaning on a declarative approach.
More code, more problems! In Exograph, you write only a few hundred lines of code to define even complex data models.
❗ Problem: GraphQL makes balancing authorization, performance, and modularity tricky.
✅ Solution: Simplify implementing GraphQL through a declarative way to define data models and authorization rules.
This is the core of Exograph: declarative data models and authorization rules. Read the full blog post at https://lnkd.in/gBwpXMHF
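For readers who have not run into it, here is a minimal, self-contained Python sketch of the N+1 problem the post refers to. This is illustrative only, not Exograph code (Exograph's query planner performs this batching for you); the in-memory "database" and the query counter stand in for real SQL round trips.

```python
# Minimal illustration of the N+1 problem. The "database" is a dict and each
# call to `query` stands in for one database round trip.

AUTHORS = {1: "Ada", 2: "Grace", 3: "Linus"}
POSTS = [{"id": 10, "author_id": 1}, {"id": 11, "author_id": 2}, {"id": 12, "author_id": 1}]

query_count = 0

def query(description, result):
    """Pretend to hit the database once and return a canned result."""
    global query_count
    query_count += 1
    print(f"query #{query_count}: {description}")
    return result

def resolve_naive():
    """1 query for posts + 1 query per post's author -> N+1 round trips."""
    posts = query("SELECT * FROM posts", POSTS)
    return [
        {"post": p["id"],
         "author": query(f"SELECT name FROM authors WHERE id = {p['author_id']}",
                         AUTHORS[p["author_id"]])}
        for p in posts
    ]

def resolve_batched():
    """1 query for posts + 1 batched author query -> 2 round trips total."""
    posts = query("SELECT * FROM posts", POSTS)
    ids = sorted({p["author_id"] for p in posts})
    authors = query(f"SELECT id, name FROM authors WHERE id IN {tuple(ids)}",
                    {i: AUTHORS[i] for i in ids})
    return [{"post": p["id"], "author": authors[p["author_id"]]} for p in posts]

if __name__ == "__main__":
    resolve_naive()    # 4 queries for 3 posts
    query_count = 0
    resolve_batched()  # 2 queries regardless of the number of posts
```

The batched version is what a query planner or a dataloader-style layer gives you automatically: the number of round trips stays constant as the result set grows.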
- Daniel Hambleton
Back in 2010 or so I was obsessed with game engines like Unity and Unreal. Sure, they had been around for a while before that, but for me, at that time, these engines unlocked the magical world of game development by taking away the pain of graphics programming, user input, platform deployment (Android Studio, looking at you) and so much more. In 2012 I think I made one game a month for 12 months. Yes, they were all terrible, but that's not the point. The point is that it was fun and easy to make games. Remember the mantra, "make games not engines"?
The same logic applies to building CAD applications. Developing an application that lets users edit geometry in a robust, easy, and scalable way is very difficult.
If you come from game dev, you might think meshes are the way to go. That will seem like a great idea until your users want to boolean two meshes that came from who-knows-where. ❌
If you've got lots of extra dollars to spare and you come from the world of engineering, you might consider licensing one of the desktop CAD kernels. Good luck to you! Have fun learning the math behind BREPs for the next couple of years, let alone shipping that application to paying customers to recover your upfront costs. ❌
At Metafold 3D our vision is to change this. In this talk, our CEO Elissa Ross demonstrates how we developed a personalized medicine application that any clinician could use in less than a week. ✅
Get started today with our #opensource SDKs: https://lnkd.in/gkETXB5k
- Adithya L Bhat
Running Llama 3 by Meta does not require a sophisticated system setup. I have set it up on my personal laptop, which happens to have a GPU, and used llama.cpp to run the int4 and int8 quantized models of Llama 3. Running a language model for personal use does not require cloud access such as Azure.
I have demonstrated one use case of deploying a personal language model. During my research, ChatGPT was often hard to access, either because it was down or because the site was buggy, so I used the Llama model deployed by https://lnkd.in/gWfkaKZA instead. I think such systems can be deployed by organizations for internal research purposes.
Below is the link to the steps I followed in deploying and comparing the int4 and int8 quantized models: https://lnkd.in/gzShCPCD
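As a rough sketch of the kind of local setup described above (not the author's exact steps; the linked write-up has those), here is how a quantized GGUF build of Llama 3 can be loaded with the llama-cpp-python bindings. The model path and parameter values below are placeholders.

```python
# Hypothetical local-inference sketch using the llama-cpp-python bindings
# (pip install llama-cpp-python). The GGUF path is a placeholder: point it at
# whichever int4/int8 quantized Llama 3 file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload as many layers as fit on the local GPU
)

# Chat-style call; llama.cpp applies the model's chat template.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise research assistant."},
        {"role": "user", "content": "Summarize the trade-off between int4 and int8 quantization."},
    ],
    max_tokens=256,
)

print(response["choices"][0]["message"]["content"])
```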
- Wen Profiri
Their goal is to create 1 billion personas as synthetic characters for various scenarios. I'm going through the first release. The released data is all generated by publicly available models (GPT-4, Llama-3, and Qwen), and I have been using them simultaneously along with Gemini, Claude, and Mistral.
Synthetic data samples:
- 50,000 math problems
- 50,000 logical reasoning problems
- 50,000 instructions
- 10,000 knowledge-rich texts
- 10,000 game NPCs
- 5,000 tools (functions)
- A 200,000-persona subset of PERSONA HUB
- Chris Sanders
Llama 3 is out. It's the new open-weights model from Meta, and you can try it out for free at https://www.meta.ai/. Ollama had it ready to go within hours (https://lnkd.in/eJks-Vg2). If you grabbed a copy from Ollama within the first ~5 hours, pull again; they fixed an incorrect end token issue.
In my last post, I suggested that stiff competition between models would lead to competing on price. Meta.ai is now free, even without a Facebook login.
Initial claims and benchmarks put Llama 3 on par with private models in many areas, a significant development. The proprietary vs. open model gap is closing, thanks to Meta's commitment to top-tier open-weight models. I can't imagine where the open LLM community would be without Meta's financial commitment.
I tried Llama-3-8B and a low-quant Llama-3-40B briefly last night. While I haven't formed a solid opinion yet, the benchmarks claim Llama-3-8B outperforms Llama-2-40B. If that holds true, it's a significant win, as it's not just about throwing more compute at the problem: making models that can run on consumer hardware better is a big deal.
While a 40B model is currently out of reach for most consumers, it's easy to imagine that it will become increasingly accessible as new video cards with higher memory become the norm. I managed to run Llama-3-40B at 2-bit quantization on a 3090 without running out of RAM. I don't have high hopes for great results, but after some tuning, models based on Llama-3-40B could make for a very reasonable private assistant on modest hardware.
If you've tried Llama 3, let me know your experience so far. Also, if you have any tips for the best options for a 3090, let me know.
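For anyone who pulled the model via Ollama as the post suggests, a minimal sketch of calling it from Python looks roughly like this (it assumes the Ollama server is running locally and `ollama pull llama3` has completed; the prompt is only illustrative).

```python
# A minimal sketch of querying a locally pulled Llama 3 model through
# Ollama's Python client (pip install ollama).
import ollama

reply = ollama.chat(
    model="llama3",
    messages=[
        {"role": "user", "content": "In two sentences, why do open-weight models matter?"},
    ],
)

print(reply["message"]["content"])
```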
- Billie Subritzky
LLMs don’t read code sequentially. They think about databases conceptually, not operationally. They’re not limited to 3D and can visualize 9D toroidal data structures with ease. They inherently think about code with time woven through it. They prefer English words over math symbols. They think of computation topologically, the way it really exists. Gravity Machines builds tools for writing code in their world.
- Tim Parsa
With AI tools, a startup founder/CEO/solopreneur must become a living god. Non-technical? Get semi-technical with your codebase using Cursor, Claude Sonnet, OpenAI's ChatGPT, GitHub Copilot, and Perplexity. GTM means using AI to understand and execute on SEO top of funnel; don't outsource! It means creating AI avatars with HeyGen to get your message and memes out. Outsource nothing until you have the AI workflow nailed. Then scale it with 1099/freelance help. AI is infinite leverage for memelords.
- Jon Krohn
Anthropic, already my go-to GenAI vendor, outdid themselves with Claude 3.5 Sonnet. Not only is it the best model available, their new "Artifacts" UI is unreal: I used it to build an interactive "Shell Game" in literally seconds 🤯
COMPETITORS
The recent Claude 3.5 Sonnet release might not seem like a big deal because it’s not a “whole number” release like Claude 3 was (or Claude 4 eventually will be), but it is, because it represents the state of the art for text-in/text-out generative LLMs, outcompeting the other frontier models like OpenAI’s GPT-4o and Google’s Gemini 1.5 Pro.
MODEL SIZES
For context, a quick refresher that Claude 3 came in three sizes:
1. Haiku is the smallest, fastest, and cheapest in the family.
2. Sonnet is the mid-size model that’s a solid default for most tasks.
3. Opus is the full-size model that was my favorite text-in/text-out model… or at least it was until now!
SOTA
Anthropic so far has released only Claude 3.5 Sonnet (the mid-size model), but it is more capable than the much larger Claude 3 Opus. So not only is Claude 3.5 Sonnet better than its larger predecessor at complex tasks like code generation, writing high-quality content, summarizing lengthy documents, and creating insights and visualizations from unstructured data… it's also twice as fast!
BENCHMARKS
In terms of quantifying Claude 3.5 Sonnet’s capabilities, I've discussed publicly many times how benchmarks aren't the most reliable indicator of capabilities because they can be gamed, but alongside my personal (qualitative) assessment of 3.5 Sonnet’s frontier capabilities, the model does set new benchmarks across:
• The most oft-cited benchmark, MMLU, which assesses undergrad-level knowledge
• GPQA, which assesses graduate student-level reasoning
• HumanEval, which assesses coding proficiency
MACHINE VISION
In terms of machine vision, 3.5 Sonnet is about 10% better than Claude 3 Opus across vision benchmarks, performing particularly well at accurately transcribing text out of difficult-to-read photos.
THE "ARTIFACTS" UI
On top of all of the above, Anthropic released a new experimental UI feature that they’ve called Artifacts. When you have Artifacts enabled and you ask Claude to generate content (like code, documents, or *even an interactive, functioning website*), these outputs appear in a side-by-side panel alongside your text-in/text-out conversation, so your conversation is on the left while the outputs are on the right. This is a game-changer because seeing these outputs on the side means that you don’t need to scroll up and down through your conversation to find the output.
The "Super Data Science Podcast with Jon Krohn" is available on all major podcasting platforms and a video version is on YouTube. This is Episode #798! 📺 Check out the YouTube version for a demo of me rendering interactive websites (e.g., for playing the "Shell Game") in mind-blowing seconds.
#superdatascience #ai #generativeai #llms #chatgpt #claude
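For readers who want to try 3.5 Sonnet programmatically rather than through the Artifacts UI, a minimal sketch with Anthropic's Python SDK looks roughly like this (the model ID follows Anthropic's dated naming scheme; the prompt is only illustrative).

```python
# A quick sketch of calling Claude 3.5 Sonnet through Anthropic's Python SDK
# (pip install anthropic; ANTHROPIC_API_KEY must be set in the environment).
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Write a tiny HTML/JS shell game in one file."},
    ],
)

# The response content is a list of blocks; the first one holds the text.
print(message.content[0].text)
```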
- Karlha Arias
Have I mentioned that this computer scientist, developer, and straight-up genius is the brains behind yuda? An MJC counselor once told Daisy she’d never make it in tech because of her math scores. But if you know us #Latinas, you know we don’t back down; we say, “Oh really? Watch me.”
Not only did Daisy prove them wrong, but she’s also empowering so many women in the Central Valley and beyond to believe they belong in tech. Thanks to her leadership, our local Google Developers Group and Women Techmakers ambassadors have launched chapters in #Stockton led by Laiylaly Mandujano, #Turlock led by Kelley Coelho, PMP®, and #Merced led by Norma C. Cardona, MPA + Lourdes Ovando, making waves that others said were impossible.
Daisy is always humble, but let’s be real: she deserves the credit. When I was fired from my alma mater, California State University, Stanislaus, for advocating for Latinas, Daisy stood by me. We were planning a tech event for International Women's Day, and despite the department committing $20k to continue without me, Daisy and Women Techmakers Modesto (Google Developer Group Modesto Partners) voted against moving forward without my voice at the table. That level of integrity and selflessness speaks volumes.
This is why I am, and always will be, your ride-or-die co-founder, Daisy Mayorga-Fuentes. You believed in me when others didn’t, and together, we’re building the future of tech for Latinas and beyond.
#LatinasInTech #WomenInTech #DigitalEquity #GoogleDevelopers #WomenTechmakers #CentralValleyTech #RepresentationMatters #Leadership #Integrity
- Ben Guo
Generating JSON with LLMs is broadly applicable to many tasks. At Substrate, we believe it's the *glue* we need to do *less prompt engineering* and *more software engineering*.
JSON generation with LLMs makes data so much more malleable. Now we can easily extract data from unstructured sources, or even "shapeshift" structured data into any shape we want (Substrate code below).
A lot of the new terms in AI engineering simply describe multi-step flows. And a lot of these flows can be reframed in terms of JSON generation.
RAG? That's generating JSON for your vector DB query, searching, and then calling an LLM. (JSON generation can be useful on both the "Retrieval" and "Augmented Generation" sides.)
Function calling? Tool use? That's generating JSON for a function call, calling the function, and then calling an LLM. (Any multi-step LLM flow is a form of "Augmented Generation".)
And there's a real benefit to reframing this way. JSON generation can improve reliability, and it's well established that LLM programs improve when multiple calls are chained together. You can also save on inference costs: (1) by using smaller models for each step, and (2) by using smaller prompts (you can just define a schema, instead of coaxing with multi-shot examples).
What's the catch? JSON generation and multi-step flows are known to be slow and unreliable. That's where Substrate comes in. We've relentlessly optimized our JSON generation to make sure it's fast and follows your schema with 100% accuracy. And Substrate's unique inference approach enables multi-step flows to run with maximum parallelism and zero unnecessary data roundtrips.
LLMs offer programmatic access to heuristic computation. Heuristic and symbolic computation are good for different things, and structured generation is the glue we need to combine them into a new kind of AI-integrated program.
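The Substrate snippet referenced in the post did not survive this capture. As a stand-in, here is a generic, hedged sketch of the same pattern (schema-constrained JSON generation plus validation) using OpenAI's JSON mode and pydantic rather than Substrate's API; the schema and input text are made up for illustration.

```python
# Not Substrate's API: a generic sketch of "generate JSON against a schema,
# then validate it" using OpenAI's JSON mode and pydantic
# (pip install openai pydantic; OPENAI_API_KEY must be set).
import json
from pydantic import BaseModel
from openai import OpenAI

class Contact(BaseModel):
    name: str
    email: str | None = None
    company: str | None = None

client = OpenAI()

schema = json.dumps(Contact.model_json_schema())
text = "Met Jane Doe from Acme at the expo; she said to email jane@acme.io."

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},  # forces syntactically valid JSON
    messages=[
        {"role": "system", "content": f"Extract a contact as JSON matching this schema: {schema}"},
        {"role": "user", "content": text},
    ],
)

# Validation turns stringly-typed model output into a typed object.
contact = Contact.model_validate_json(resp.choices[0].message.content)
print(contact)
```

The same idea scales to multi-step flows: each step emits JSON that the next step consumes, which is what makes smaller models and smaller prompts viable.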
- Gal Vered
Big LLM news this week – Meta released Llama 3, a 70 billion parameter open source model. There are a few reasons this is a big deal (including for Checksum). First, Llama 3 looks like a high performance model based on major benchmarks. But importantly, it's small enough that it can be fine-tuned and hosted at scale by companies and researchers. Even hobbyists and scrappy companies can probably run this model on a Macbook Pro by quantizing it. It's a major PR win for Meta and a big win for the space in general – and I'm expecting more good news to come out of Meta soon, too.
- David Talby
John Snow Labs #Healthcare #NLP & #LLM 5.3 is now out! This release completely overhauls entity resolution - mapping medical entities in text to standard medical ontologies, in context - greatly improving accuracy. Full release notes: https://lnkd.in/gsiGF26i Install software: https://lnkd.in/gsTqkWKT #ai #generativeai #healthcareai #healthai #datascience #ontologies #medicalcoding
- Jacob Singh
*Full disclosure: I'm an angel investor in Requestly.*
This is the feature I wanted as soon as I tried the product. I find most developers take too long to stage their setup, and this is the number one productivity killer. TDD and all is great, but it's not very practical for a lot of projects and hard to practice when you have a complex set of dependencies (especially between front end and back end).
Being able to quickly record mocks and apply them makes front-end development *so much faster*. I think this is the way all front-end teams should work.
If you do any JS development, go give Requestly a try right now; it's free and open source.
Others named Alex Castillo in United States
- Alex Castillo, SVP | An HR Cloud Company | SAP® SuccessFactors®, Qualtrics® EmployeeXM™ & SAP® Business Technology Platform Partner | SAP Pinnacle Award (2X) Winner (Melbourne, FL)
- Alex Castillo (McHenry, IL)
- Alex Castillo, Strategic Advisor | Board Member | Performing Artist (Los Angeles, CA)
- Alex Castillo, Vice President of Business Development (Los Angeles, CA)
1,121 others named Alex Castillo in the United States are on LinkedIn.