Nvidia's relentless development cycle has resulted in enormous profits for the company, which brought in $41.1 billion in data center sales in its most recent quarter.
Jamba (hybrid transformer/state-space) is a killer model folks are sleeping on. It's actually coherent at long context, fast, has good world knowledge, stays even-keeled and grounded, and is good at RAG. It's like a straight-up better Cohere model IMO, and a no-brainer to try for many long-context calls.
TBH I didn't try Falcon H1 much since it seemed to break at long context for me. I think most folks (at least publicly) are sleeping on hybrid SSMs because support in llama.cpp is janky at best (context caching doesn't work, for instance), so they're not getting any word-of-mouth. Jamba's restrictive commercial licensing (unless you pay them) doesn't help either.
…Not sure about others, toy models aside. There really aren’t too many to try.
…TBH, DeepSeek is the only real departure from bog-standard grouped-query-attention transformers that folks have played with much. The big trainers seem to be risk-averse architecture-wise (hence no big BitNet model attempts yet), which is what Nvidia is betting on, I guess.
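For what it's worth, here's roughly how I'd kick the tires on the long-context claim with plain Hugging Face transformers. A minimal sketch: the exact checkpoint name, the chat-template behavior, and the `retrieved_docs.txt` file are my assumptions, so double-check the AI21 model card (and its license) before leaning on it.

```python
# Minimal sketch: feed a long retrieved context to Jamba in one call.
# Model id is assumed; verify it (and the license) on the AI21 model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/AI21-Jamba-1.5-Mini"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Stuff a long pile of RAG chunks plus a question into a single user turn.
long_context = open("retrieved_docs.txt").read()  # hypothetical file of retrieved chunks
messages = [
    {"role": "user", "content": f"{long_context}\n\nQuestion: What changed between v1 and v2?"}
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```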
ty, appreciate this