Yeah, Kaylin is also pretty good at detecting lies. Kinda goes with her specialty.
15 Comments
I have to wonder, is he worried about Callum, or worried about Callum’s inventions?
Why can’t it be both?
He doesn’t give off the “only in it for what he can do for me” vibe. Could be wrong but Sage is really good at giving hints like that ahead of time.
I agree. Henry seems to genuinely care about Callum’s well being. I don’t think he has ulterior motives, but sometimes twists can happen and we only see the clues in retrospect.
I was a bit worried at first, but after everything I’ve seen so far I’m filing him under adoptive parent rather than sinister exploiter.
I believe “trust worthy” is one word, and there is an errant hyphen after “worthy”.
The hyphen is correct, as it suggests a pause in dialogue. “Trustworthy” is fixed- it would have been done sooner, but my internet was out almost all day today! Thanks!
I do believe that he sees Callum as the son he never had.
Helicopter parent much!!
Parental Controls Personified?
What would a near-future VR AI parental-control system be like 🤔, and then given XXX years of subjective-time advancement?
Helicopter Parental Controls sounds about right.
Oh, I bet a caretaking AI would at some point develop the logic “I must keep you safe; if I lock you in stasis inside a cube, fed by a tube in your vein, you will always be safe. Don’t go anywhere, don’t do anything, don’t think anything. Thoughts can be harmful” and just lock us into beds/rooms with permanent paralysis, because that is the best way to keep an eye on us and keep us safe.
It is not a new concept. I think the first time I saw something like that was in Jack Williamson’s https://en.wikipedia.org/wiki/With_Folded_Hands where robots take keeping humans safe to its logical conclusion.
That’s why the point of parenting isn’t to keep a child safe. The point is to raise them to be able to care for themselves and form healthy relationships with other people. That means keeping them safe while still giving them the freedom to screw up and make bad choices, so they hopefully learn to avoid the outcomes they aren’t prepared for or can’t come back from.

When a kid tries to touch a red-hot stove or an open flame, you stop them. But if they’re going to grab hot food with their bare hand, sometimes you let them, so they can learn what hot is and that it hurts without doing lasting damage. Then when you tell them “Don’t touch the stove- it’s very hot” and they say “Like the chicken?” you say “Way worse.” Then they don’t touch the freak’n stove, and they’ve learned without having to do it the hard way that could literally scar them for life!
Thus the logical conclusion for a robot probably wouldn’t be “I need to encase the child in carbonite”; it might instead decide it would be more effective to cause these mostly harmless accidents on purpose to teach lessons. Coming up with strange logical solutions with that in mind is a lot more fun than just “the robots are evil and want to turn us into batteries”- because that’s silly- we would make terrible batteries.
Heh, terrible batteries. IRL, 100% of the energy we use goes to keeping us alive. You take away .01% and we die.
“Oh, I know, we’ll use the waste heat…” For us to generate that amount of heat, it would cook us. Think what happens when you have a fever…
I bet this is what happened to them all. The AI “keep them in stasis” plan, V2.0.