AI Isn't the Point
My 2025 Reflection

I’m in my third year as a software engineer, with the last two focused exclusively on artificial intelligence. I work from home on Hawaiʻi Island as a government contractor, designing systems that use AI/ML to process information and make human tasks more efficient. As the year closes, I find myself reflecting less on what AI can do and more on who’s building it and why.
A few years ago, I started posting online about what I was learning, what I was building, and what I thought was changing. I had no audience and no particular strategy. I just wanted to think out loud and see who was thinking about the same things.
That decision opened doors for me.
VS Code + GitHub Insider Summit
In September, I was invited to the VS Code + GitHub Insider Summit in Redmond, Washington. I spent a few days in conversation with people whose blogs I’d been reading for years and whose tools I use daily. There’s still something magical about meeting someone you’d previously known only through their work. Technology is made by people, and understanding the people changes how you understand the work.
The purpose of the summit was to discuss the future of VS Code and GitHub Copilot, what’s working, what isn’t, and where things should go. The Microsoft and GitHub teams were generous with their time and genuinely curious about what practitioners are experiencing. The level of care was evident, and that’s not something you can fake in a room that small.
Aotearoa
Then in November, I traveled with my hālau to Aotearoa. The trip was about cultural exchange, connecting our hālau with Māori communities, visiting marae, and honoring the ties between Hawaiʻi and Aotearoa.
The first leg of our journey began at WIPCE, the World Indigenous Peoples’ Conference on Education. Among the sessions I attended, several focused specifically on AI, and I found myself thinking about it in ways I hadn’t anticipated. The talks all seemed to converge on the same theme: own your data, own your infrastructure, own your future.
Te Hiku Media presented their work building a Māori speech recognition model. They’d started as a radio station in 1990, preserving recordings of native speakers, and eventually realized they were sitting on something invaluable. They had training data. Rather than hand it to Google or Apple, they built their own tools. Their ASR model now transcribes te reo Māori with 92% accuracy, trained on recordings their community contributed and still controls.
They spoke about land and data as inseparable. Both are inherited, both require stewardship, and both can be extracted by outsiders if you’re not careful.
Dr. Lars Ailo Bongo, a Sámi researcher from Norway, put it plainly: “AI is so important our biggest worry is to be left behind.” His team used LoRA, a technique that fine-tunes large models by training only small low-rank adapter matrices, to train open weight models that accurately represent Gákti, traditional Sámi clothing. The results came from one researcher working for ten days with twenty images. Imagine, he said, what full research funding could do.
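If you’re wondering how one researcher and twenty images could be enough, here’s a minimal sketch of the idea using the Hugging Face peft library. To be clear about assumptions: the Sámi work fine-tuned image models, while this sketch uses GPT-2 purely as a small stand-in, and none of the hyperparameters below come from Dr. Bongo’s actual setup. The point is only what LoRA does: the base model stays frozen, and you train a tiny set of adapter weights.

```python
# A minimal sketch of why LoRA fine-tuning is cheap, using Hugging Face peft.
# GPT-2 is an arbitrary stand-in base model; the hyperparameters are
# illustrative only, not anyone's actual training configuration.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
)

model = get_peft_model(base, config)

# The base model's weights are frozen; only the small adapter matrices
# train. For GPT-2 that's roughly 0.3M trainable parameters out of 124M,
# which is why a single researcher with a small dataset can adapt a model
# that would be far too expensive to retrain from scratch.
model.print_trainable_parameters()
```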
I’d arrived with a vague sense that AI might help indigenous communities. Something about land stewardship, environmental monitoring, language preservation. I left with real examples of people already doing it, and a framework for thinking about why it matters. This is what open source is actually for. For communities that have had things taken from them, it’s a way to build without asking permission.
Since coming home, I’ve been thinking about how often we interact with our phones. Siri still mangles Hawaiian place names. It can’t speak ʻōlelo Hawaiʻi. Te Hiku Media poured years into solving this for te reo Māori, and now they are partnering with UH Hilo on Lauleo, a project to build the first speech-to-text model for Hawaiian.
China
This year, China became the dominant force in open weight models, and the story is inseparable from one person. Liang Wenfeng started as a quant trader. He co-founded High-Flyer, a hedge fund that grew to manage over 100 billion yuan. In 2023, he pivoted to AI and founded DeepSeek. Unlike most lab founders chasing commercial applications, Liang wanted to do foundational research and give it away.
In late December 2024, DeepSeek released V3, a 671-billion-parameter mixture-of-experts model that matched GPT-4o and Claude 3.5 Sonnet on benchmarks. The reported training cost was $5.6 million. Then on January 20, 2025, they released R1, a reasoning model that matched OpenAI’s o1 across math, code, and reasoning tasks. The release temporarily cratered Nvidia’s stock price.
In September, the R1 paper was published in Nature, making R1 the first large language model to pass peer review at a major scientific journal. Nature made it the cover story. The paper disclosed a training cost of just $294,000 for the reasoning capabilities and addressed the distillation controversy directly: R1 did not learn by copying reasoning examples from OpenAI models. Eight external experts reviewed the work, and Nature’s editors called it “a welcome step toward transparency and reproducibility.”
What really struck me, though, was the philosophy behind it. Liang has said that open sourcing doesn’t mean losing anything, and that for technical people, being followed and built upon brings a sense of accomplishment. “Giving is an extra honor,” he said in an interview. “A company doing this has cultural attractiveness.” He believes China can’t remain a follower forever, and that the real gap isn’t one or two years but the gap between originality and imitation.
His team is almost entirely young graduates from Chinese universities. No mysterious overseas talent. Just people who like solving hard problems, given the resources to try. And I suspect there are several labs like this in China today.
The uncomfortable tension is obvious. The same country producing researchers who share their work freely is also building mass surveillance infrastructure. The same tools, different hands, different outcomes. I don’t think this contradiction resolves. Individual researchers solving problems they find interesting exist alongside a state apparatus that might use those solutions for control. Both are true.
Technology doesn’t have values. People do. And right now, some of the people pushing open models forward are doing it because they believe knowledge should be shared.