Autonomous systems are autonomous because they respond to changing circumstances, and the interesting case is when those circumstances are not predictable. Defining the problem beforehand, as conventional software engineering requires, misses the point: real autonomy means dealing with the unexpected, and pre-specifying the problem makes the solution trivial. Existing UXV design is, in AI terms, still in the "blocks world". The SFPV under development is an attempt to move beyond that.

Getting something to fly at all under the control of a computer is, however, its own challenge. This is the SFPV v0.1, which was designed to use a SunSpot as the on-board controller with a wireless link to a laptop, rather than classic radio control. It ended up with contra-rotating blades and a few extra pieces, but really, radio-control helicopters already have the airframe and control sorted. That counterweight on the top rotor makes them amazingly stable. If only we could figure out how to interface them to a computer...
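
To make that interfacing problem concrete, here is a minimal sketch of the laptop side of the link. A plain UDP datagram stands in for the SunSpot's wireless link, and the address, port and two-byte throttle/yaw payload are illustrative assumptions rather than the real protocol:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Laptop-side sketch: send a (throttle, yaw) command to the on-board controller.
// The address, port and payload format below are hypothetical placeholders.
public class CommandLink {
    public static void main(String[] args) throws Exception {
        InetAddress controller = InetAddress.getByName("192.168.0.42"); // hypothetical address
        int port = 9000;                                                // hypothetical port

        int throttle = 120;  // 0..255
        int yaw = 10;        // signed trim, -128..127

        byte[] payload = { (byte) throttle, (byte) yaw };
        DatagramSocket socket = new DatagramSocket();
        socket.send(new DatagramPacket(payload, payload.length, controller, port));
        socket.close();
        System.out.println("Sent throttle=" + throttle + " yaw=" + yaw);
    }
}

The on-board side would do the reverse: read the two bytes off the radio each cycle and map them onto the throttle and tail-rotor outputs.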

So, assuming we can get a computer to talk to a UAV, can we create autonomous systems that respond appropriately when the appropriate response has not been programmed in? Some (such as my old boss at Agent Oriented Software) think we can, and some think a human is required in the loop somewhere. This paper describes a working simulation of a BDI-based "automated wingman" that took high-level instructions and applied common sense to them.
@inproceedings{jackUav,
  author    = "Peter Wallis and Ralph R{\"o}nnquist and Dennis Jarvis and Andrew Lucas",
  title     = "The Automated Wingman --- Using JACK Intelligent Agents for Unmanned Autonomous Vehicles",
  booktitle = "IEEE Aerospace Conference",
  address   = "Big Sky, Montana, USA",
  month     = "March",
  year      = "2002"
}
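
To give a flavour of what "applying common sense" means in BDI terms, here is a toy plan-selection sketch. It is not JACK code (JACK extends Java with its own plan and event constructs), and the class, goal and belief names are hypothetical; what it does show is the core cycle of matching a high-level goal against plans whose context conditions hold over the agent's current beliefs:

import java.util.*;

// Minimal BDI-style plan selection (hypothetical names, not the JACK API).
// The agent holds beliefs, receives a high-level goal from the pilot, and
// runs the first relevant plan whose context condition is true right now.

final class Beliefs {
    private final Map<String, Object> facts = new HashMap<>();
    Object get(String key) { return facts.get(key); }
    void set(String key, Object value) { facts.put(key, value); }
}

interface Plan {
    boolean relevant(String goal);        // does this plan handle the goal?
    boolean applicable(Beliefs beliefs);  // is its context condition true now?
    void execute(Beliefs beliefs);        // plan body: actions or sub-goals
}

// "Cover my six": only applicable with enough fuel to stay on station.
class CoverTailPlan implements Plan {
    public boolean relevant(String goal) { return goal.equals("cover-my-six"); }
    public boolean applicable(Beliefs b) { return (Double) b.get("fuel") > 0.3; }
    public void execute(Beliefs b) {
        System.out.println("Taking station behind and above the lead aircraft.");
    }
}

// Fallback: low on fuel, so decline and say why rather than silently failing.
class DeclineAndReportPlan implements Plan {
    public boolean relevant(String goal) { return goal.equals("cover-my-six"); }
    public boolean applicable(Beliefs b) { return true; }
    public void execute(Beliefs b) {
        System.out.println("Unable: fuel state " + b.get("fuel") + "; returning to base.");
    }
}

public class Wingman {
    public static void main(String[] args) {
        Beliefs beliefs = new Beliefs();
        beliefs.set("fuel", 0.2);
        List<Plan> library = List.of(new CoverTailPlan(), new DeclineAndReportPlan());

        String goal = "cover-my-six";  // high-level instruction from the pilot
        for (Plan p : library) {
            if (p.relevant(goal) && p.applicable(beliefs)) { p.execute(beliefs); break; }
        }
    }
}

With a fuel state of 0.2 the agent declines the instruction and explains why instead of blindly executing it, which is the sort of common-sense response to a high-level instruction the automated wingman was after.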