Your dividing line seems to be that anything we accomplish short of full-blown human intelligence is an expert system, and only what lies beyond counts as AI. It is an irrelevant distinction anyway, since AI research has directly led to the tens of thousands of expert systems that are in use every day.
No, I'd settle for an AI that exhibited even the self-determination and learning ability of a common ant or honey bee. Especially if all you are going to do with it is stick it in a missile and blow people up. We have nothing like this, nothing.
I know because I associate with cyberneticists, software engineers and biologists who have made this their life's work and who are engaged in it day in, day out.
Truly independent strong AIs are a pipe dream, at least for now, and merely throwing money at the problem won't by itself solve that. Digital computers as we know them will not, by themselves, give us what we want. That is why it is IMPERATIVE that AIs be Asimov machines, or at the very least be able to tell friend from foe ... or else we are all in big trouble. The fact is we have nothing like this, nor does there seem to be anything too promising in that direction. Even humans aren't all that good at it, if the stories of "friendly fire" are anything to go by. But I suppose you are saying "collateral damage" is a worthwhile price to pay for trying to develop AIs? Try explaining that on the six-o'clock news when a "safe" prototype AI drone malfunctions in a city full of your own people. Or even in a battlefield scenario.
I suggest you read the work of people like Blay Whitby and Kyran Dale (University of Sussex School of Cognitive and Computing Sciences) for some practical background.