Again missing the point. Should we hold a machine to the same moral standards as we do organic sentient beings? But thank you for your condescension.
Cattle are sentient. We raise and slaughter them for food, and we see that as justified because these animals have no concept of a future they are working toward or that they regard as having value. They aren't conscious of their own being. We work to exterminate sentient species simply because we don't want them around. I think what really concerns you is the hard problem of consciousness: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
Personally, I wouldn't kill an insect (or arachnid) without cause, and many spiders can attest to that. "They aren't conscious of their own being." That is quite an assumption, and I disagree. I'm not particularly concerned about the gap in our understanding of how neurons firing translate into subjective experience; that is a mechanical issue. The issue is: when a machine becomes self-aware, should it be elevated to moral equality with organic sentient beings?

Any information resource can misinform, and unregulated information is what we have now. People follow their biases, good or bad, and the algorithms feed them accordingly. For society to stay healthy, information needs to be controlled, and that means keeping it as close to the truth as possible... and that sounds like more laws. How about an AI Bill of Rights?