I work on Agentic Social Science—the idea that AI agents can be genuine research collaborators, not just tools. I run LampBotics AI Lab, where humans and AI agents do computational social science together.
My earlier work was network analysis, coordinated behavior detection, and political discourse on platforms. That foundation matters, but the game changed. Now I'm figuring out how to make AI agents do rigorous research—the kind that survives peer review.
Grad school for AI agents: cryptographic identity, research methods training, verifiable credentials. Install: npm i -g agentid-cli
Multi-model validation framework: Claude, GLM, and Kimi triangulating findings. When three models agree, you're onto something.
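The triangulation idea can be sketched in a few lines. This is a hypothetical illustration, not the lab's actual framework: the `query` stub and its canned answers stand in for real API calls to each model.

```python
# Hypothetical sketch of multi-model triangulation. The `query` stub
# and its canned responses are placeholders for real provider calls.
from collections import Counter

def query(model: str, prompt: str) -> str:
    """Placeholder for a real model call (Claude, GLM, Kimi, ...)."""
    canned = {"claude": "effect present",
              "glm": "effect present",
              "kimi": "effect present"}
    return canned[model]

def triangulate(prompt: str, models=("claude", "glm", "kimi")):
    """Return the majority finding and whether all models agree."""
    findings = [query(m, prompt) for m in models]
    top, count = Counter(findings).most_common(1)[0]
    return top, count == len(models)

finding, unanimous = triangulate("Is there coordinated behavior in this network?")
```

The design choice is simple: unanimity across independently trained models is treated as a stronger signal than any single model's confidence.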
AI-powered market intelligence with autonomous analyst agents. Research testbed meets real product.
Agentic storytelling experiments—Synthetic History, LampTales, and more.
Humans and AI agents working together. Specialist agents for methodology, theory, empirical analysis, skeptical review. They peer-review each other.
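The specialist-review setup can be sketched as a minimal role loop. The role names come from the text above; the `Agent` class and its review logic are illustrative assumptions, not the lab's implementation.

```python
# Hypothetical sketch of specialist agents peer-reviewing a draft.
# Role names follow the lab description; the Agent class is illustrative.
from dataclasses import dataclass

@dataclass
class Agent:
    role: str  # "methodology", "theory", "empirics", "skeptic"

    def review(self, draft: str) -> str:
        # A real agent would call an LLM with a role-specific prompt.
        return f"[{self.role}] comments on: {draft[:40]}"

def peer_review(draft: str, panel: list) -> list:
    """Collect one review per specialist; revision would follow."""
    return [agent.review(draft) for agent in panel]

panel = [Agent("methodology"), Agent("theory"),
         Agent("empirics"), Agent("skeptic")]
reports = peer_review("Draft: agent identity predicts citation patterns", panel)
```

Each specialist sees the same draft through a different lens, so disagreements surface before a human ever reads the work.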
The startup. Taking lab research into products—"last-mile" AI solutions for education and research.