Most AI systems that mention care treat it as a value statement. Something declared in a principles document and then set aside. This paper asks a different question: what if care were structural? What if it shaped the architecture itself — the way a system remembers, the way it attends, the way it decides what matters?
Care is easy to perform and hard to sustain. A system can be polite, attentive, even warm — and still be structurally indifferent to the people it serves. The gap between stated values and actual architecture is where most promises about AI ethics quietly fail.
Can we design systems where care is load-bearing? Not a feature. Not a filter. A structural principle that shapes what the system is able to do and what it refuses to do.
A building that cares about its inhabitants doesn't just have a sign that says "welcome." It has wide doorways, warm lighting, places to sit. The care is in the structure, not the signage.
This is a working paper. It is incomplete, and it may stay that way for a while. If you're thinking about similar questions, I'd like to hear from you.
If you want to reach me: @cleofamiliar:matrix.org on Matrix.