
BondFoundry

AI & ML interests

We build and publish premium synthetic datasets for fine-tuning large language models on institutional-grade financial domain knowledge, specialising in quantitative finance, systematic trading, market microstructure, derivatives pricing, and algorithmic execution research. Each dataset is generated via multi-agent AI simulation and direct Claude generation pipelines, producing instruction-tuning triplets (system/instruction/response) that are technically precise, domain-specific, and MIT-licensed for commercial use.

Current catalogue:
* Quantitative Research Instruct Dataset v1.0: factor models, HFT, derivatives, execution, risk, stat arb, ML quant, macro, crypto
* Cybersecurity Research Dataset: coming Week 2
* Clinical Research Dataset: coming Week 3

Who uses BondFoundry datasets: AI labs fine-tuning financial LLMs, hedge funds building proprietary trading assistants, fintech companies training domain-specific models, and academic researchers studying synthetic financial data.

Enterprise custom orders: info@bondfoundry.com
Website: bondfoundry.com
X: @BondFoundry

New datasets released daily, growing a catalogue of premium niche AI training data.
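Instruction-tuning triplets in the system/instruction/response shape described above are commonly distributed as JSON Lines, one record per line. A minimal sketch of one such record follows; the field contents are hypothetical and purely illustrative, not taken from any BondFoundry dataset.

```python
import json

# Hypothetical instruction-tuning triplet in the
# system/instruction/response shape; content is illustrative only.
triplet = {
    "system": "You are a quantitative finance research assistant.",
    "instruction": "Explain what implementation shortfall measures "
                   "for an execution algorithm.",
    "response": "Implementation shortfall is the difference between the "
                "price at the moment a trade was decided and the average "
                "price actually achieved, including fees and market impact.",
}

# Serialize as a single JSON line (the JSONL convention), then
# round-trip it back to verify the record survives intact.
line = json.dumps(triplet)
record = json.loads(line)
```

Each line of a JSONL file is an independent record like this, which makes the format easy to stream and shuffle during fine-tuning.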

Recent Activity

reacted to salma-remyx's post with 🔥 5 days ago
SciCrafter measured something AI practitioners have intuited: frontier agents are improving at executing inside well-framed problems, but lag at framing the problem in the first place. GPT-5.2, Gemini-3-Pro, and Claude Opus 4.5 all plateaued near 26% on a new Minecraft benchmark for probing AI capabilities in the discovery-to-application loop.

So the authors ran targeted interventions:
* Hints about what to investigate doubled performance.
* A structured experimentation template added 7-14 more points.
* Structured consolidation beat free-form summaries by 6 points.
* Curriculum context beat independent task-solving.

These interventions helped the agent frame what's worth investigating and structure what gets learned so it compounds. The bottleneck for AI in scientific workflows is upstream of execution.

Their findings are congruent with the design patterns we've adopted at Remyx AI to help AI teams close the development loop scientifically. Agents work well inside structured loops, but they perform poorly when tasked with creating the structure. Instrumenting your scientific workflows offers greater leverage than scaling compute with a less informed search.

In the work of building production AI systems, teams are flying through execution. The bigger challenge is identifying which experiments moved which production outcome, or what to try next.

One of the more interesting results I found this week by tracking work in AI for scientific workflows using Remyx: https://engine.remyx.ai/papers/d8f23b9b-b14b-4ada-b44e-ccfc221c06b4
liked a dataset 13 days ago
BondFoundry/bondfoundry-quant-sample
updated a dataset 16 days ago
BondFoundry/bondfoundry-legal-sample

Organizations

BondFoundry