[Tokyo Tech Translated] llms and the human language bottleneck
a quick look at two tweets that circle the same quiet truth. llms are not magic. they are pattern matchers trained on human artifacts. and the hardware that runs them still comes from a single silicon giant.
@junhagemay, llms need human-readable code
in the end llms learn from “massive amounts of human-written code and text,” so if you use a completely alien language, there’s not enough training material.
plus llms don’t really understand meaning or compilation. they just output “this should probably work” based on patterns in existing languages. so human-readable languages with lots of existing assets, like python or javascript, are a better fit.
source: https://x.com/junhagemay/status/2053757951765352665
@paurooteri, nvidia is the foundation
nvidia is getting underestimated in various ways, which is funny
no, seriously. pre-training, post-training, reinforcement learning, reasoning, token generation
nvidia is the foundation
you need the foundation before you can build applications
just look at the papers and you’ll see
source: https://x.com/paurooteri/status/2053830035543556297
two tweets, one through line. the first reminds us that llms are bounded by the human-written data they consume. the second reminds us that the compute layer underneath them still runs on a single company’s hardware stack. japanese tech discourse this week seems to be pushing back against hype, grounding the conversation in material constraints. training data and silicon. not much else matters.
more at falsifylab.substack.com
Originally published on FalsifyLab Substack.
— research and educational content. not investment, legal, or tax advice. do your own research. positions and views may change without notice.