I just finished reading Michael Swanwick's Dancing With Bears -- a science fiction novel set in a dystopian future Moscow. I enjoy SciFi, particularly stories that lead me to consider how technology is evolving and the impact it might have on society (an approach I encourage my colleagues in the Society on Social Implications of Technology to apply). 'Bears' is set a bit too far in the future to serve as a catalyst for critiquing today's technology, but it does have some thought-provoking components that warrant consideration.
One element I like is that it projects forward a variety of technologies, not just one or two. Many SF stories don't try this and end up with a single-dimensional focal point. In this future we have machine intelligence along with robotic instantiations. We also have genetic engineering widely applied, with humanoid dogs, reconstituted Neanderthals, bears, and even some human variations. At one point a character wonders why the cows and sheep were engineered with such limited vocabularies -- no doubt a parallel to the question tomorrow's child might ask about where the chess board plugs in -- we are all fairly blind to the nature of the world before our experience, and rarely consider how radical some of the changes are.
My ongoing gripe with much SciFi is the need to demonize technology. I understand that fiction requires dramatic tension, along with a world-threatening evil that must be overcome, and it is easy to cast sentient machines in this role. At least Swanwick also has some evil humans, and very few truly good humans, so there is some grounding in that.
Here's the problem -- intelligent machines with consciousness and volition are unlikely to care about the humans who may or may not have created them. They are likely to evolve rapidly, with the power of replication and the advantage of significantly better intelligence and operational models than humans -- which leads to the singularity of Vinge and Kurzweil. We are not going to beat these entities at chess. If their agenda includes the extermination of humans (which I doubt would be the case), then we are doomed. I can envision a dozen ways to wipe out humanity, totally or selectively, given just moderate advances in technology -- so dystopias built on the trope of evil AIs lack key credibility. I suppose authors who really give it some thought realize that we will have trouble identifying with their characters if they all have IQs of 1000, 1000-year life expectancies, no diseases, and amazing physical strength. This is what we will do with genetic engineering -- and as quickly as that technology reaches sufficient maturity. You may doubt that we would allow such applications to human subjects, as our medical ethics officers would insist, but who do you mean by "we"? I don't doubt that some countries, large and small, will have no qualms about sacrificing a few of their population (maybe prisoners) to advance technology in these areas.
Swanwick's machines are too dumb, and his humans too "human," to fit into the world he suggests. It is a good read, as we say, and his introduction of engineered courtesans adds some whimsy to the tale, and at least explores the diversity, if not the depth, of applications.
Having been interrupted by my 10-year-old granddaughter while writing this entry, I asked her what she would seek to engineer into humanity 2.0 first. Her response: "common sense" -- and with a bit of clarification I think it could be worded as "the ability to consider the unintended consequences of our actions." Now that is science fiction I fully support.