A research practice on the limits of artificial creativity.
A semantic collision engine for non-trivial idea generation, grounded in Koestler's bisociation theory of human creativity (1964).
|  |  |
| --- | --- |
| Built for | Researchers, ideation teams, R&D departments. Any domain where one non-trivial idea outweighs the cost of compute. |
| Evidence | Benchmarked across 12 projects with a blind LLM-judge protocol. See the launch study ↓ |
Language models converge to a narrow region of idea space when given the same brief, a phenomenon researchers call the Artificial Hivemind. We tested whether distant-domain collisions actually escape it.
- Jiang, L. et al. (2025). Artificial Hivemind: The Open-Ended Homogeneity of Language Models. arXiv:2510.22954
- Lion, C. (2026). Why Direct Prompting Pushes LLMs Toward Trivial Ideas. Oparine Working Paper 01.
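A minimal sketch of the collision idea, separate from any production engine: represent concepts as embeddings, score candidate pairs by cosine distance, and keep only cross-domain pairs above a distance threshold as generation seeds. All names, vectors, and the threshold here are hypothetical; a real system would use a sentence-embedding model rather than toy vectors.

```python
from itertools import combinations
import math

# Toy concept embeddings (hypothetical values for illustration only).
# Each concept is tagged with its domain.
CONCEPTS = {
    "fermentation": ("biology", [0.9, 0.1, 0.0]),
    "yeast":        ("biology", [0.8, 0.2, 0.1]),
    "escrow":       ("finance", [0.1, 0.9, 0.2]),
    "ledger":       ("finance", [0.0, 0.8, 0.3]),
}

def cosine_distance(u, v):
    """1 - cosine similarity; larger means semantically farther apart."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def collision_seeds(concepts, min_distance=0.5):
    """Keep cross-domain pairs whose embedding distance clears a threshold.

    These remote pairs seed the generation prompt instead of the raw
    brief, which is what direct prompting would converge on."""
    seeds = []
    for (a, (dom_a, va)), (b, (dom_b, vb)) in combinations(concepts.items(), 2):
        if dom_a != dom_b and cosine_distance(va, vb) >= min_distance:
            seeds.append((a, b))
    return seeds

print(collision_seeds(CONCEPTS))
```

The point of the filter is the search space, not the wording: same-domain pairs are excluded outright, and near pairs are dropped, so every surviving seed forces the model to bridge structurally remote material.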
We deploy our generation methods inside companies whose work depends on outputs the default LLM output space can't reach. The engine is calibrated to your problem, your use cases, your brand.
When the words have to read like the brand wrote them. We tune the engine on what you say, what you'd never say, and what only you can mean.
When a brief has more upside in being right alone than right with everybody else. We bring distance from structurally remote domains, not louder prompting.
When in-house generation has stopped surfacing fresh angles and every iteration lands in the same neighborhood. We rebuild the search, not the prompt.