LLM - Large Language Model - It's the program that is configured to read and train on a very large dataset, for a long time. Training can be a one-off thing, or continuous... And then the model can be used for inference (answering questions), which is usually what we users do with it.
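To make the training/inference split concrete, here is a minimal inference sketch using the Hugging Face transformers library (the model name, gpt2, is just a small illustrative choice; any pretrained LLM is queried the same way at inference time):

```python
from transformers import pipeline

# Inference only: we load weights someone else already spent the
# training time and compute on, and simply ask the model questions.
generator = pipeline("text-generation", model="gpt2")

prompt = "High performance computing is"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```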
The data is the secret sauce, because without access to it, an AI is not useful. Because Hive is open source and everything on it is visible (except encrypted memos), LLMs can train on the blockchain data. And because that data is all about social interaction, it has extreme value for companies building products their users would like to consume.
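As a rough illustration of how accessible that data is, here is a minimal Python sketch that pulls recent posts from Hive's public JSON-RPC API (the api.hive.blog endpoint and the condenser_api.get_discussions_by_created method are real; the tag and limit values are just example parameters):

```python
import requests

# Hive exposes a public JSON-RPC API; no API key or login is needed.
HIVE_API = "https://api.hive.blog"

def fetch_recent_posts(tag="", limit=10):
    """Fetch the most recently created posts, optionally filtered by tag."""
    payload = {
        "jsonrpc": "2.0",
        "method": "condenser_api.get_discussions_by_created",
        "params": [{"tag": tag, "limit": limit}],
        "id": 1,
    }
    response = requests.post(HIVE_API, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()["result"]

# Each returned post carries author, title, body, created, etc. --
# exactly the kind of text an LLM pipeline could ingest for training.
for post in fetch_recent_posts(tag="technology", limit=5):
    print(post["author"], "-", post["title"])
```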
Example:
Imagine I want to find out if there is anyone on Hive who has ever talked about supercomputers... but they might not have used the word "supercomputer", or might have only talked about a specific component whose name I don't know yet, so I can't really search for it.
If I feed the blockchain data into an LLM, I can then ask the "agent" that interacts with that LLM to return any post on Hive that is specifically talking about HPC (High Performance Computing), supercomputer models, or parts.
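This works because the model matches meaning rather than exact keywords. A minimal sketch of the underlying idea using the sentence-transformers library (the model name and the sample posts here are purely illustrative assumptions):

```python
from sentence_transformers import SentenceTransformer, util

# Embedding models map text to vectors, so "related to supercomputers"
# can be found by similarity instead of by exact keyword match.
model = SentenceTransformer("all-MiniLM-L6-v2")

posts = [
    "Built a 4-node Raspberry Pi cluster with a custom rack and switch",
    "My favourite recipes for sourdough bread",
    "Ran a molecular dynamics job on 1024 Xeon cores at the uni HPC centre",
]

query = "posts about supercomputers or high performance computing"
query_vec = model.encode(query, convert_to_tensor=True)
post_vecs = model.encode(posts, convert_to_tensor=True)

# Cosine similarity ranks posts by meaning; note that neither of the
# matching posts uses the word "supercomputer" at all.
scores = util.cos_sim(query_vec, post_vecs)[0]
for post, score in sorted(zip(posts, scores), key=lambda p: -p[1]):
    print(f"{float(score):.2f}  {post}")
```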
And it generally works. Example using Grok... the following output is copy-pasted from Grok, so you can see the power of these things.
Yes, there are definitely people on Hive who talk about supercomputers, and not just using the exact word. Multiple users have posted directly about them (definitions, advantages, real-world examples like Nvidia's AI supercomputers or IAGON's decentralized blockchain-powered one), and many more discuss the underlying concepts and hardware without always saying "supercomputer". This includes High Performance Computing (HPC) setups, building personal clusters (essentially DIY mini-supercomputers), scientific applications that need massive parallel processing, and specific components like GPUs, high-speed interconnects, processors, cooling systems, and multi-node architectures.

Direct examples using the term "supercomputer"

@randallnewton has written several detailed posts, including one on Dell commoditizing supercomputers and turning them into more accessible hardware (comparing it to what they did with PCs). Another covers Dell EMC winning a University of Michigan contract for the "Great Lakes" cluster and specifics on the Pitzer Cluster at Ohio State (528 Intel Xeon Gold processors, 64 Nvidia Tesla V100 GPUs, EDR InfiniBand networking for ultra-fast data transfer, and liquid cooling).
@forexperformance, @sabbirkhan, and @leoglossary have posts defining supercomputers, their OS/cooling needs, advantages/disadvantages, and how they rank in power.
Others cover Nvidia building massive AI supercomputers or projects like IAGON as a "global supercomputer" on decentralized/blockchain tech.
Without the exact word (but clearly about supercomputer-level tech/components)

@travelingmercies posted a series on planning and designing their own Beowulf-style cluster computer (multiple nodes for grand computing power). They discuss hardware like Raspberry Pi 3s (quad-core 1.2 GHz, 1 GB RAM each), power hubs, Ethernet networking, custom racks, and why it's needed for research-level work. This is classic entry-level supercomputing/cluster building.
@dexterdev ran an 18-day molecular dynamics simulation on a university HPC facility (128 nodes, 1024 cores, Intel Xeon processors). They detail PBS scripts, time steps (2 femtoseconds), system size (DMPC lipid bilayer in water), and scaling up, which is pure HPC/supercomputer usage without always labeling it as such.
Posts on high-speed interconnects (e.g., InfiniBand or Mellanox HDR 200 Gb/s for faster data movement in clusters), liquid cooling, GPU acceleration in massive systems, and even market overviews of the HPC industry appear regularly.
Bonus context

There's also chatter around HIVE Digital Technologies (the crypto/AI company, ticker HIVE), which has been expanding into GPU-based data centers and AI/HPC workloads; some Hive users post about that angle too, tying blockchain to supercomputing-scale power. These aren't super common daily topics (Hive is more crypto/DeFi-focused overall), but there are consistent posters like @randallnewton who clearly follow the supercomputing/HPC world closely, plus occasional deep dives from tech/science users.

If you want to explore: head to peakd.com (or ecency.com) and search terms like "supercomputer", "HPC", "cluster computing", "InfiniBand", "Beowulf", or follow the users above.
Hive even has community tools like HiveSearcher or HiveSQL for more advanced filtering.
If you have a specific component in mind (e.g., "InfiniBand networking", "liquid cooling systems", "GPU clusters for AI", "quantum interconnects", or whatever you're thinking of), drop the name—I can hunt for exact posts or discussions about it on Hive. Or if you'd like me to pull full content from one of these posts/users, just say which!
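As an aside on the HiveSQL tool Grok mentions above: it exposes the blockchain as a regular Microsoft SQL Server database you can query directly. A minimal sketch, assuming the Comments table with author/title/created/depth/body columns (the server name, database name, and column list should be checked against the HiveSQL docs; the login comes from your HiveSQL registration):

```python
import pyodbc

# Connection string is illustrative; HiveSQL issues credentials when
# you register, and the driver name may differ on your machine.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=vip.hivesql.io;DATABASE=DBHive;"
    "UID=your_login;PWD=your_password"
)

# Plain keyword search -- exactly what the LLM approach improves on,
# but handy for precise filtering once you know the terms.
query = """
    SELECT TOP 10 author, title, created
    FROM Comments
    WHERE depth = 0              -- top-level posts, not replies
      AND body LIKE '%InfiniBand%'
    ORDER BY created DESC
"""
for author, title, created in conn.cursor().execute(query):
    print(created, author, title)
```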
Wow, this is a lot of information, but I understand what you mean. All this information is both complex and fascinating.
It wouldn't be an AI if it didn't store all this information and take advantage of it.