Feb. 8, 2026

From Haiti to Edge AI: Building Privacy-First Learning Tools with Sebastien Fenelon


Education changes when resilience meets practical technology, and that is the thread running through this conversation with founder and technologist Sebastien Fenelon. He grew up in Haiti, where electricity and internet access were scarce, yet carved a path into coding by leaning on persistence, curiosity, and community. That origin story offers a sharp reminder for educators: the most valuable resource is not hardware but mindset. When learners see a path from idea to outcome, they keep going even when the lights go out, literally in Sebastien’s case. The episode explores how that grit translates into classrooms today, where students in K–12 and higher education can access tools and tutorials that once cost thousands of dollars. With the right framing, even basic HTML becomes a gateway: give context, then let students build small, visible wins. The outcome is not just code; it is confidence, and confidence scales.

Coding is the bridge between imagination and software, and AI is the engine that compresses timelines. What used to take six months can be prototyped in a week when language models become collaborators rather than crutches. That acceleration creates a seductive trap: generic, prompt-built products that all look the same because the outputs are trained on broad patterns. Sebastien argues that the antidote is fluency, an understanding of the stack, the languages, and the architectural choices, so students and educators can shape systems rather than just prompt them. He recommends a “100-hour” sprint for any new language: choose a clear goal, follow a structured path of documentation and tutorials, and push through the discomfort. Once you can read and write the basics, AI becomes leverage, not a substitute. Educators can model this approach by having students build targeted projects, annotate trade-offs, and reflect on what the AI got wrong and why.

The promise of AI in education is personalization at scale. For decades, schooling has been standardized because it had to be; one teacher could not custom-tailor instruction for thirty learners in real time. With AI, it becomes possible to track understanding by concept and chapter, then adjust supports for each student. Sebastien highlights edge AI as a pivotal shift: smaller on-device models can power lesson planning assistance, formative feedback, and classroom analytics without sending sensitive data to the cloud. Teachers get actionable insight into who needs a simpler analogy, who is ready for stretch problems, and who stalls on transfer tasks, while students receive timely nudges and exercises matched to their current level. The key is analytics on understanding rather than only grades: a map of conceptual grasp and misconceptions that can guide instruction without labeling students.
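One way to picture "analytics on understanding rather than only grades" is a per-concept mastery map kept on the teacher's device. A minimal sketch follows; the concept names, the 0.0–1.0 scoring scale, and the thresholds are illustrative assumptions, not details from the episode:

```python
from collections import defaultdict


class ConceptMap:
    """Tracks each student's grasp of individual concepts on a 0.0-1.0 scale."""

    def __init__(self, mastery_threshold: float = 0.7):
        self.threshold = mastery_threshold
        # student -> concept -> latest formative-assessment score
        self.scores: dict[str, dict[str, float]] = defaultdict(dict)

    def record(self, student: str, concept: str, score: float) -> None:
        """Store the most recent score for a concept."""
        self.scores[student][concept] = score

    def needs_support(self, student: str) -> list[str]:
        """Concepts where the student is below the mastery threshold."""
        return [c for c, s in self.scores[student].items() if s < self.threshold]

    def ready_for_stretch(self, student: str) -> list[str]:
        """Concepts mastered well above threshold: candidates for stretch problems."""
        return [c for c, s in self.scores[student].items() if s >= 0.9]


cmap = ConceptMap()
cmap.record("amara", "fractions", 0.55)
cmap.record("amara", "decimals", 0.95)
print(cmap.needs_support("amara"))      # concepts needing a simpler analogy
print(cmap.ready_for_stretch("amara"))  # concepts ready for harder work
```

Because the map lives in plain data structures, it can run entirely on a classroom device, which is exactly the edge-AI posture the episode describes.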

Privacy is not a side note; it is the foundation that earns trust. Sebastien outlines how today’s default often treats user data as payment: you provide prompts and documents, and the model provides an answer while retaining signals that might surface elsewhere. Add the realities of prompt injection and inadvertent sharing of sensitive information, and the risk escalates for schools and universities bound by policy and law. The EU AI Act raises the bar by requiring transparency and data protection, and that standard is pushing the ecosystem to mature. Practical measures include tokenization and sanitization before data ever touches a model, strict control paths that prevent personal identifiers from being transmitted, and local or edge models that run offline when possible. The goal is simple: deliver the same power without leaking the person.
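The "tokenization and sanitization before data ever touches a model" step can be sketched as a redaction pass that swaps identifiers for opaque tokens and keeps the mapping locally, so responses can be re-personalized without the model ever seeing the real values. The regex patterns and token format below are simplified assumptions for illustration, not a production-grade PII detector:

```python
import re

# Illustrative patterns only; a real deployment needs a proper PII detection library.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}


def sanitize(text: str) -> tuple[str, dict[str, str]]:
    """Replace identifiers with opaque tokens; return clean text plus the local mapping."""
    mapping: dict[str, str] = {}
    counter = 0
    for label, pattern in PATTERNS.items():
        def repl(match, label=label):
            nonlocal counter
            token = f"<{label}_{counter}>"
            counter += 1
            mapping[token] = match.group(0)
            return token
        text = pattern.sub(repl, text)
    return text, mapping


def restore(text: str, mapping: dict[str, str]) -> str:
    """Re-insert original values after the model responds; the mapping never leaves the device."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text


prompt = "Email jane@school.edu about the 555-123-4567 callback."
clean, mapping = sanitize(prompt)
print(clean)  # identifiers are replaced with tokens before any model sees the text
```

The key design point is that `mapping` stays on the local side of the boundary: the remote model only ever handles tokens, so a retained prompt cannot leak the person.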

Educators are already building bespoke chatbots tuned to a course, a unit, or a skill progression, and that can work when the system is designed with privacy-first principles. Sebastien contrasts closed cloud systems with truly local models. A closed model inside a vendor’s cloud reduces exposure but still traverses networks and infrastructure you do not control; local models keep computation on the device, eliminating routes for third-party access and making prompt injection far harder. For schools, that can mean combining a minimal local model for sensitive tasks with selective use of larger external models for non-sensitive research, always through a sanitizing gateway. This hybrid approach preserves capability while staying compliant and transparent, which builds confidence with faculty, students, and families.
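The hybrid pattern described here, a small local model for sensitive work plus a sanitizing gateway in front of larger external models, can be sketched as a simple router. The model functions, the sensitivity markers, and the sanitizer below are stand-in assumptions, not any real vendor API:

```python
import re

# Illustrative markers of sensitive school data; a real policy list would be broader.
SENSITIVE_MARKERS = re.compile(r"grade|iep|student id|parent", re.IGNORECASE)


def local_model(prompt: str) -> str:
    """Stand-in for a small on-device model; nothing leaves the machine."""
    return f"[local answer to: {prompt}]"


def external_model(prompt: str) -> str:
    """Stand-in for a large cloud model, reached only through the gateway."""
    return f"[cloud answer to: {prompt}]"


def strip_identifiers(prompt: str) -> str:
    """Placeholder sanitizer; a real gateway would tokenize names, emails, and IDs."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "<REDACTED_EMAIL>", prompt)


def gateway(prompt: str) -> str:
    """Route sensitive prompts to the local model; sanitize everything else first."""
    if SENSITIVE_MARKERS.search(prompt):
        return local_model(prompt)
    return external_model(strip_identifiers(prompt))


print(gateway("Summarize this student ID 4471 progress note."))  # stays on device
print(gateway("Find open-access readings on photosynthesis."))   # safe for the cloud
```

The routing decision is the compliance boundary: anything matching the sensitive policy never constructs a network request at all, which is what makes the hybrid both capable and defensible.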

Staying future-ready is less about predicting tools and more about protecting habits. Keep learning in focused cycles, join or form communities where practitioners share patterns and pitfalls, and cultivate mentors who offer more than tutorials—people who reveal the human path behind the skills. 

🔗 Website and Social Links:

Please visit Sebastien Fenelon’s website and social media links below.

Sebastien Fenelon’s Website

Sebastien’s Facebook Page

Sebastien’s Instagram Page

Sebastien’s LinkedIn Page

📢 Call-to-Action: If you’d like to dive deeper, be sure to visit the InthraOS homepage. There, you’ll find resources, guides, and a newsletter that will keep you ahead of the curve on AI and privacy-first technology. It’s a great way to gain practical insights, connect with a growing community, and explore products you can start using right away.

Photo by Jakub Zerdzicki: https://www.pexels.com/photo/webcam-security-camera-smart-home-monitoring-equipment-on-home-office-table-28117882/