
The Alignment Problem: Machine Learning and Human Values by Brian Christian, W. W. Norton & Company (October 6, 2020), 496 pages
See Part 2 of the review here.
What follows is, as the title says, a revisit, revision, and expansion of an earlier post of mine. I may continue doing this as new thoughts occur to me while I work through my thinking on the philosophy of mind.
Cartesian dualism has been a point of contention in philosophy since at least, well, Descartes. The dispute is over whether the mind is an immaterial entity separate from the physical body. Problems have plagued the dualist view since Descartes's time, chief among them explaining how an immaterial mind and a material body could interact.
Natural rights don’t exist, except in the human mind. They are a way for a social species to maintain social cohesion. But, as useful as natural rights may be in deciding how to organize society, they are not fundamental; rather, they are derivative of what humans, in general, desire.
Since at least World War I, the idea of war as being all about glory and heroism has been met with massive disillusionment. Most people, I think, would agree that war is not a good thing, even if some consider it a necessary one. But technological arms races, both in wartime and in peacetime, generate a plethora of technological advances. That raises the question: should futurists and transhumanists welcome war as a way to usher in greater and faster technological progress?