Posts

Showing posts from February, 2023

Using Logic Gates as Neurons - Deep Differentiable Logic Gate Networks!

32Bits RAM in Digital Logic Sim #digitallogic

Joshua E. Jodesty - CATs: Content-Addressable Transformers | PyData NYC ...

Vectoring Words (Word Embeddings) - Computerphile

Learn from External Memory, not just Weights: Large-Scale Retrieval for ...

How ChatGPT works - From Transformers to Reinforcement Learning with Hum...

Art of doing disruptive research

An Introduction to Graph Neural Networks: Models and Applications

Why are neural networks so effective?

Attention - the beating heart of ChatGPT: Transformers & NLP 4

Palantir CEO Alex Karp on Responsible AI in Warfare | REAIM 2023

GOA Just Leaked ATF’s Zero Tolerance Manual

Youtube AI business spam offensive

Smart Systems - AI Opportunity Analysis Framework

Pledger Results Overview

Deep Cellular Insights Start Here

Changes In the Academic World - Jocko Willink & Peter Maguire

Zardoz, Zed discovers the true nature of the Tabernacle

5D Quartz ETERNAL Storage 💿 How it Works

CUDA compat

What’s new in TensorFlow 2.11

15 futuristic databases you’ve never heard of

OpenAssistant - ChatGPT's Open Alternative (We need your help!)

Data Structures

The Transformer Family Version 2.0 · January 27, 2023 · 45 min · Lilian Weng
https://lilianweng.github.io/posts/2023-01-27-the-transformer-family-v2/

An Introduction to Vector Symbolic Architectures (VSA) and Hyperdimensio...

Deep Learning at Scale with Horovod feat. Travis Addair | Stanford MLSys...

Scaling with the Ring Allreduce Algorithm

Distributed and Decentralized Learning - Ce Zhang | Stanford MLSys #68