Thursday, September 26, 2024

PDX Python

Our first presentation, by Rey Abolofia, was on how to speed up AWS Lambda processing by shipping compiled Python bytecode (.pyc) instead of source (.py). The time to load a large package, such as numpy or matplotlib, may drop drastically (by some 45% in the demo data).
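For the curious, here's a minimal sketch of one way to pre-compile a deployment directory before zipping it up. This is my reconstruction of the general technique, not Rey's actual build script, and it assumes the build machine runs the same Python version as the Lambda runtime:

    import compileall
    import pathlib

    PKG = pathlib.Path("package")   # hypothetical Lambda deployment dir

    # Write module.pyc next to each module.py (legacy layout), since the
    # interpreter ignores __pycache__ once the .py sources are gone.
    compileall.compile_dir(PKG, legacy=True, quiet=1)

    # Remove the .py sources so only bytecode ships in the zip.
    for source in PKG.rglob("*.py"):
        source.unlink()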

The Lambda service charges only while a function is actually running, billing by memory allocated and execution duration (GB-seconds), so shaving load time shaves the bill.
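As a back-of-envelope illustration (the rate below is the commonly cited x86 figure; rates vary by region and architecture, so check current AWS pricing):

    RATE = 0.0000166667   # approx USD per GB-second (x86, as of writing)

    def invocation_cost(memory_mb, duration_s):
        """Rough Lambda compute cost for one invocation."""
        return (memory_mb / 1024) * duration_s * RATE

    # A 1 GB function whose 2.0 s cold start is mostly package loading,
    # before and after a 45% load-time reduction:
    print(invocation_cost(1024, 2.0))    # ~ $0.000033
    print(invocation_cost(1024, 1.1))    # ~ $0.000018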

Rob Bednark presented on CHOP, Chat Oriented Programming, which is what a lot of geeks are experimenting with these days, for a low entry cost of about $10 per month. CHOP only became a viable reality this year.

Through a process of refining prompts, a generative LLM may be coaxed into doing a lot of the grunt work around programming. It's like having an apprentice, or, if you're new to the ecosystem, a mentor. I predict AI will free a lot of grad students from having to slave for their supervising faculty quite so much.
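In script form, the chat loop at the heart of CHOP might look something like the toy sketch below. In practice most people just use a chat UI rather than the API; the openai package and model name here are one possible setup, not anything Rob prescribed:

    from openai import OpenAI     # assumes OPENAI_API_KEY is set

    client = OpenAI()
    history = [{"role": "system",
                "content": "You are a careful Python pair programmer."}]

    while True:
        prompt = input("you> ")   # refine the ask each round
        if not prompt:
            break
        history.append({"role": "user", "content": prompt})
        reply = client.chat.completions.create(
            model="gpt-4o-mini", messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        print(answer)             # paste into your editor, test, re-prompt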

During the discussion, I mentioned experiments with AI performed by Daniel vis-a-vis Quadrays and ivm-xyz conversion. Daniel fed my Python repo to Perplexity, asking for a clearer more documented version of the code. The results were not up to par, but helped motivate me to improve my original.
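For readers new to Quadrays: the core conversion is only a few lines. Here's a minimal sketch using one published mapping (conventions for the scale constant vary, so don't take this as the repo's exact code):

    from math import sqrt

    def quadray_to_xyz(a, b, c, d):
        """Map quadray (a, b, c, d) to XYZ; the four basis quadrays
        land on alternating corners of a cube, a regular tetrahedron."""
        k = 1 / sqrt(2)
        return (k * (a - b - c + d),
                k * (a - b + c - d),
                k * (a + b - c - d))

    print(quadray_to_xyz(1, 0, 0, 0))   # (0.707..., 0.707..., 0.707...)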

I came with Dr. DiNucci, a computer scientist who observes Python culture from a distance. His area of expertise is parallel and concurrent processing, for which he has been designing an orchestration language named Scalpel.