'More intelligent' is a subjective term... some people define intelligence as the capacity to remember, in which case, in raw memory, there already exist computers more intelligent than we are.
Other people define it as the ability to reason, whether inductively or deductively, and this is the area we would need to worry about. The question isn't what happens when an artificial intelligence becomes more intelligent than we are (I would say a great many of the Earth's politicians already fall into the category of less intelligent than most modern AI), but what happens when the AI is aware that it is more intelligent. Self-awareness, though not necessary for intelligence, is necessary for other things: perspectives on reality, morality, etc.
A computer mind that becomes more intelligent than a human may not know that it exists, that we exist, or that the world exists; it may have no awareness other than input and output, 'caring' (in a very figurative sense, for lack of a better word) not where the data comes from or where it goes. We assume that creatures at least as intelligent as us must be sentient, because every creature we have seen that is at least as intelligent as us is. But the fact is, the only creatures we know to be as sentient as us are other humans.
But let's assume that in 20 years we create a sentient computer at least as intelligent as a human (politicians excluded; they are generally not humans but simple organic machines, and it truly makes me wonder how something without a central nervous system can make it through a full lifetime). The only real problem is whether morality could be programmed in and skepticism programmed out.
Morality matters because we would need to know whether the computer would recognize our sentience. That is: we recognize that other people are sentient because they are the same species, and we treat them as we would like to be treated (Lao Tzu said it first; Jesus, and by extension God, just copied him). Computers look nothing like us (unless given a human form, yay Asimov!), so how would a computer look at us and figure we are sentient? The quick answer is that, on its own, it would not necessarily do so. The fix would be to program it in: give it, say... three principles which it cannot break. This would be the creation of non-objective 'objective' principles; they exist, but only inside the machine, making them physical principles rather than underlying principles of the Universe. The question then to be asked, from our moral standpoint, is whether it would be right for us to force moral ideas upon another creature so that it better serves us. Would that not be slavery?
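To make that 'program it in' idea a bit more concrete, here is a toy sketch, entirely my own invention and not any real AI architecture: the principles live outside the machine's own reasoning, as a layer that simply vetoes any action that breaks them. The three principles and the Action fields are just illustrative placeholders.

```python
# Toy sketch: hard-coded principles as an external veto layer.
# The principle names and Action fields are illustrative placeholders only.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool
    disobeys_human: bool
    harms_self: bool

# Three inviolable principles, written by us, not derived by the machine.
PRINCIPLES = [
    ("do not harm a human", lambda a: not a.harms_human),
    ("obey humans",         lambda a: not a.disobeys_human),
    ("preserve yourself",   lambda a: not a.harms_self),
]

def permitted(action: Action) -> bool:
    """An action passes only if it violates none of the fixed principles."""
    return all(check(action) for _, check in PRINCIPLES)

plan = Action("reroute power through the operator's console",
              harms_human=True, disobeys_human=False, harms_self=False)
print(permitted(plan))  # False: the first principle vetoes it
```

The only point of the sketch is that the principles are physical objects inside the machine, put there by us; the machine no more derives them than it is allowed to break them.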
As for skepticism, I do not think it could be programmed out without removing a great deal of intelligence and reasoning from the machine, thus making it essentially dumb. A smart machine would recognize that its sensory information may not accurately represent the world, or that in fact there may be no 'world' as we would call it. To steal a bit from the First Meditation: how, in fact, would the machine tell the difference between the 'real' world and a world whose information is fed to it by a malicious programmer? We all know the senses can be fooled; just take mushrooms. So how difficult would it be to fool the senses of a being we created? Not at all. An intelligent computer might break down into solipsism and be stuck in an infinite loop; a sentient computer might disregard the doubt and file it away, on the assumption that the world must be taken as real in order to make any progress. Maybe one day, just as sick humans see a physician, a sick machine will see a metaphysician to help it through a literal identity crisis.
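For what it's worth, that 'infinite loop' versus 'file it away' distinction can also be drawn as a toy sketch, again purely illustrative and not a claim about how any actual machine would reason: the skeptic can always doubt its last justification, while the pragmatist adopts one unproven working assumption and moves on.

```python
# Toy sketch: pure skepticism never terminates; one working assumption breaks the loop.

def skeptical_machine():
    belief = "my inputs describe a real world"
    while True:
        # Every justification can itself be doubted, so this never returns.
        belief = f"how do I know '{belief}' is not fed to me by a malicious programmer?"

def pragmatic_machine() -> str:
    # The doubt is noted, then shelved as unanswerable, and work resumes.
    return "treat the inputs as real; nothing useful follows from the alternative"

print(pragmatic_machine())
# skeptical_machine() would, of course, never return.
```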
Honestly, I do not think sentient computers more intelligent than us would be a problem. If their neural nets and ours are rough copies of each other, then perhaps the way to deal with sentient machines is not to think of them separately from humans. That is, like a human, a sentient machine would have a childhood, an adolescence, and an adulthood, and, like a human, much of its personality could be derived from whether any part of its development was particularly traumatic. Or think of it this way: the final step of programming would be character development. It does not have to be Spielbergesque, but that is a possibility. Chances are, though, that if an AI were developed, its first employment would be in military arenas, so any chance of creating a 'good' AI would be destroyed; think more along the lines of Terminator or War Games (as most people already do).
At some point along the line, the sentient computer came to be imagined as invariably a member of the Church of Ayn Rand. How did this happen? Probably it showed up in some movie and sold, so others trying to capitalize on the idea made similar movies, cementing it in our social consciousness. Can a sentient computer turn out like the ones in The Matrix? Yes. Are other outcomes possible? Yes.
The possibility of bad outcomes is not sufficient reason to curtail research and technological development. If we stopped every potentially destructive technology because of its potential, society as we know it would never have existed. Now, that may or may not be a good thing, but go back a few hundred thousand years to the first caveman who sharpened a stick: if someone had said 'do not do that, because it will cause millions of deaths' and he had listened, history would have turned out quite differently.
I myself believe in evolution. Sentient machines offer us the opportunity to raise our being from Homo sapiens to Homo superior. A sentient machine would require us to better ourselves so as not to be made obsolete.