Faisal Z. Qureshi¹, Demetri Terzopoulos¹,², and Piotr Jasiobedzki³

¹ Dept. of Computer Science, University of Toronto, Toronto, ON M5S 3G4, Canada
{faisal,dt}@cs.toronto.edu
² Courant Institute, New York University, New York, NY 10003, USA
dt@nyu.edu
³ MD Robotics Limited, Brampton, ON L6S 4J3, Canada
pjasiobe@mdrobotics.ca
Abstract. We present a cognitively-controlled vision system that combines low-level object recognition and tracking with high-level symbolic reasoning, with the practical purpose of solving difficult space robotics problems—satellite rendezvous and docking. The reasoning module, which encodes a model of the environment, performs deliberation to 1) guide the vision system in a task-directed manner, 2) activate vision modules depending on the progress of the task, 3) validate the performance of the vision system, and 4) suggest corrections to the vision system when the latter is performing poorly. Reasoning and related elements, among them intention, context, and memory, contribute to improving the performance (i.e., robustness, reliability, and usability). We demonstrate the vision system controlling a robotic arm that autonomously captures a free-flying satellite. Currently such operations are performed either manually or by constructing detailed control scripts. The manual approach is costly and exposes the astronauts to danger, while the scripted approach is tedious and error-prone. Therefore, there is substantial interest in performing these operations autonomously, and the work presented here is a step in this direction. To the best of our knowledge, this is the only satellite-capturing system that relies exclusively on vision to estimate the pose of the satellite and can deal with an uncooperative satellite.
1 Introduction
Since the earliest days of the field, computer vision researchers have struggled with the challenge of effectively combining low-level vision with classical artificial intelligence. Some of the earliest work involved the combination of image analysis and symbolic AI to construct autonomous robots [1, 2]. These attempts met with limited success because the vision problem was hard, and the focus of vision research shifted from vertically-integrated, embodied vision systems to low-level, stand-alone vision systems. Currently available low- and medium-level vision systems are sufficiently competent to support subsequent levels of processing. Consequently, there is now a renewed interest in high-level, or cognitive vision, which is necessary if we are to realize autonomous robots capable of performing useful work. In this paper, we present an embodied, task-oriented vision system that combines object recognition and tracking with high-level symbolic reasoning. The latter encodes a symbolic model of the environment and uses the model to guide the vision system in a task-directed manner.
We demonstrate the system guiding a robotic manipulator during a satellite servicing operation involving rendezvous and docking with a mockup satellite under lighting conditions similar to those in orbit. On-orbit satellite servicing is the task of maintaining and repairing a satellite in its orbit. It extends the operational life of the satellite, mitigates technical risks, and reduces on-orbit losses, so it is of particular interest to multiple stakeholders, including satellite operators, manufacturers, and insurance companies. Currently, on-orbit satellite servicing operations are carried out manually; i.e., by an astronaut. However, manned missions usually have a high price tag and there are human safety concerns. Unmanned, tele-operated, ground-controlled missions are infeasible due to communications delays, intermittence, and limited bandwidth between the ground and the servicer. A viable option is to develop the capability of autonomous satellite rendezvous and docking (AR&D). Most national and international space agencies realize the important future role of AR&D and have technology programs to develop this capability [3, 4].
Autonomy entails that the on-board controller be capable of estimating and tracking the pose (position and orientation) of the target satellite and guiding the servicing spacecraft as it 1) approaches the satellite, 2) manoeuvres itself to get into docking position, and 3) docks with the satellite. Our vision system meets these challenges by controlling the visual process and reasoning about the events that occur in orbit—these abilities fall under the domain of "cognitive vision." Our system functions as follows: (Step 1) captured images are processed to estimate the current position and orientation of the satellite (Fig. 1), (Step 2) behavior-based perception and memory units use contextual information to construct a symbolic description of the scene, (Step 3) the cognitive module uses knowledge about scene dynamics encoded using the situation calculus to construct a scene interpretation, and finally (Step 4) the cognitive module formulates a plan to achieve the current goal. The scene interpretation constructed in Step 3 provides a mechanism to verify the findings of the vision system. The ability to plan allows the system to handle unforeseen situations.
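To make the data flow of these four steps concrete, the following Python-flavored sketch renders one control cycle. All class and method names are placeholders of our own choosing, not the system's actual API:

```python
# Illustrative rendering of Steps 1-4 above; the objects `cameras`,
# `perception`, and `cognition` are hypothetical stand-ins for the modules
# described in this paper.
def control_cycle(cameras, perception, cognition, goal):
    images = cameras.capture()                     # Step 1: acquire stereo images
    pose = perception.estimate_pose(images)        # Step 1: satellite position/orientation
    scene = perception.update_symbolic_scene(pose) # Step 2: context -> symbolic description
    interpretation = cognition.interpret(scene)    # Step 3: situation-calculus interpretation
    return cognition.plan(interpretation, goal)    # Step 4: plan toward the current goal
```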
To our knowledge, the system described here is unique inasmuch as it is the only AR&D system that uses vision as its primary sensor and that can deal with an uncooperative target satellite. Other AR&D systems either deal with cooperative target satellites, where the satellite itself communicates with the servicer craft about its heading and pose, or use other sensing aids, such as radars and geostationary position satellite systems [5].

Fig. 1. Images observed during satellite capture. The left and center images were captured using the shuttle bay cameras. The right image was captured by the end-effector camera. The center image shows the arm in hovering position prior to the final capture phase. The shuttle crew use these images during satellite rendezvous and capture to locate the satellite at a distance of approximately 100 m, to approach it, and to capture it with the Canadarm—the shuttle manipulator.

1.1 Related Work
The state of the art in space robotics is the Mars Exploration Rover, Spirit, that is now visiting Mars [6]. Spirit is primarily a tele-operated robot that is capable of taking pictures, driving, and operating instruments in response to commands transmitted from the ground. It lacks any cognitive or reasoning abilities. The most successful autonomous robot to date that has cognitive abilities is "Minerva," which takes visitors on tours through the Smithsonian's National Museum of American History; however, vision is not Minerva's primary sensor [7]. Minerva has a host of other sensors at its disposal, including laser rangefinders and sonars. Such sensors are undesirable for space operations, which have severe weight/energy limitations.
A survey of work about constructing high-level descriptions from video can be found in [8]. Knowledge modeling for the purposes of scene interpretation can either be hand-crafted [9] or automatic [10] (as in machine learning). The second approach is not feasible for our application: it requires a large training set, which is difficult to gather in our domain, in order to ensure that the system learns all the relevant knowledge, and it is not always clear what the system has learnt. Scene descriptions constructed in [11] are richer than those in our system, and their construction approach is more sound; however, they do not use scene descriptions to control the visual process and formulate plans to achieve goals.
In the next section, we explain the object recognition and tracking module. Section 3 describes the high-level vision module. Section 4 describes the physical setup and presents results. Section 5 presents our conclusions.
2 Object Recognition and Tracking
The object recognition and tracking module [12] processes images from a calibrated passive video camera-pair mounted on the end-effector of the robotic manipulator and computes an estimate of the relative position and orientation of the target satellite. It supports medium and short range satellite proximity operations; i.e., approximately from 20 m to 0.2 m.

Fig. 2. Object recognition and tracking system.
During the medium range operation, the vision system cameras view either the complete satellite or a significant portion of it (image 1 in Fig. 3), and the system relies on natural features observed in stereo images to estimate the motion and pose of the satellite. The medium range operation consists of the following three phases:
– In the first phase (model-free motion estimation), the vision system combines stereo and structure-from-motion to indirectly estimate the satellite motion in the camera reference frame by solving for the camera motion, which is just the opposite of the satellite motion [13].
– The second phase (motion-based pose acquisition) performs binary template matching to estimate the pose of the satellite without using prior information [14]. It matches a model of the observed satellite with the 3D data produced by the previous phase and computes a rigid transformation, generally comprising 3 translations and 3 rotations, that represents the relative pose of the satellite. The six degrees of freedom (DOFs) of the pose are solved in two steps. The first step, which is motivated by the observation that most satellites have an elongated structure, determines the major axis of the satellite; the second step then resolves the four remaining DOFs—the rotation around the major axis and the three translations—by an exhaustive 3D template match over them.
– The last phase (model-based pose tracking) tracks the satellite with high precision and update rate by iteratively matching the 3D data with the model using a version of the iterative closest point algorithm [15]; a simplified sketch of this matching loop follows the list. This scheme does not match high-level features in the scene with the model at every iteration, which reduces its sensitivity to partial shadows, occlusion, and local loss of data caused by reflections and image saturation. Under normal operative conditions, model-based tracking returns an estimate of the satellite's pose at 2 Hz with an accuracy on the order of a few centimeters and a few degrees.
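For concreteness, here is a minimal point-to-point ICP iteration in the spirit of the model-based tracking phase. This is not the flight code: the use of SciPy's KD-tree, the convergence threshold, and the iteration budget are assumptions made for the sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (SVD method)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp_track(model_pts, scene_pts, R, t, iters=20, tol=1e-4):
    """Refine the previous pose (R, t) by iteratively matching 3D scene data to the model."""
    tree = cKDTree(model_pts)
    prev_err = np.inf
    for _ in range(iters):
        moved = scene_pts @ R.T + t              # bring scene data into the model frame
        dist, idx = tree.query(moved)            # closest model point for each datum
        dR, dt = best_rigid_transform(moved, model_pts[idx])
        R, t = dR @ R, dR @ t + dt               # compose the incremental correction
        err = dist.mean()
        if abs(prev_err - err) < tol:            # stop once the fit no longer improves
            break
        prev_err = err
    return R, t                                  # satellite pose follows by inverting this transform
```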
Fig. 3. Images from a sequence recorded during an experiment (first image at 5 m; third at 0.2 m).

At close range, the target satellite is only partially visible and it cannot be viewed simultaneously from both cameras (the second and third images in Fig. 3); hence, the vision system processes monocular images. The constraints on the approach trajectory ensure that the docking interface on the target satellite is visible from close range, so markers on the docking interface are used to determine the pose and attitude of the satellite efficiently and reliably [12]. Here, visual features are detected by processing an image window centered around their predicted locations. These features are then matched against a model to estimate the pose of the satellite. The pose estimation algorithm requires at least 4 points to compute the pose. When more than four points are visible, sampling techniques choose the group of points that gives the best pose information. For the short range vision module, the accuracy is on the order of a fraction of a degree and 1 mm right before docking.
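A hedged sketch of this marker-based computation follows, with OpenCV's RANSAC-based PnP solver standing in for the sampling technique mentioned above. The marker coordinates and camera parameters are hypothetical placeholders, not the actual docking-interface geometry:

```python
import numpy as np
import cv2

# Hypothetical marker positions on the docking interface, in the model frame (meters).
MARKERS_3D = np.array([[0.0, 0.0, 0.0],
                       [0.1, 0.0, 0.0],
                       [0.0, 0.1, 0.0],
                       [0.1, 0.1, 0.02]], dtype=np.float32)

def estimate_pose(image_pts, camera_matrix, dist_coeffs):
    """image_pts: Nx2 detected marker centers (N >= 4, as the text requires)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        MARKERS_3D, np.asarray(image_pts, dtype=np.float32),
        camera_matrix, dist_coeffs)
    if not ok:
        return None                      # detection failed; let the tracker coast
    R, _ = cv2.Rodrigues(rvec)           # rotation vector -> 3x3 rotation matrix
    return R, tvec                       # pose of the docking interface in the camera frame
```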
The vision system can be configured on the fly depending upon the requirements of a specific mission. It provides commands to activate/initialize/deactivate a particular configuration. The vision system returns a 4×4 matrix that specifies the relative pose of the satellite, a value between 0 and 1 quantifying the confidence in that estimate, and various flags that describe the state of the vision system.
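One plausible shape for this output record, inferred from the description above (the field names are ours, not the system's API), is:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class VisionEstimate:
    pose: np.ndarray      # 4x4 homogeneous transform: satellite pose relative to the camera
    confidence: float     # in [0, 1]; how much this estimate should be trusted
    flags: dict = field(default_factory=dict)  # vision-system state, e.g. {"glare": False}
```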
3 Cognitive Vision Controller
The cognitive vision controller controls the image recognition and tracking module by taking into account several factors, including 1) the current task, 2) the current state of the environment, 3) the advice from the symbolic reasoning module, and 4) the characteristics of the vision module, including processing times, operational ranges, and noise. It consists of a behavior-based, reactive perception and memory unit and a high-level deliberative unit. The behavior-based unit acts as an interface between the detailed, continuous world of the vision system and the abstract, discrete world representation used by the cognitive controller. This design facilitates a vision controller whose decisions reflect both short-term and long-term considerations.
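Purely as a toy illustration of how such factors might drive configuration choice, the following sketch switches between the medium-range and short-range configurations of Section 2. The switch-over distance and confidence gate are guesses; the real controller also weighs the task, the reasoning module's advice, processing times, and noise:

```python
from enum import Enum

class Config(Enum):
    MEDIUM_RANGE = "natural features, stereo"
    SHORT_RANGE = "docking-interface markers, monocular"

def select_configuration(predicted_range_m: float, confidence: float) -> Config:
    # Hypothetical rule: markers only become usable close in, and only when
    # the predicted range itself is trustworthy.
    if predicted_range_m < 2.0 and confidence > 0.5:
        return Config.SHORT_RANGE
    return Config.MEDIUM_RANGE
```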
3.1 Perception and Memory: Symbolic Scene Description
The perception and memory unit performs many critical functions. First, it provides tight feedback loops between sensing and action that are required for reflexive behavior, such as closing the cameras' shutters upon detecting strong glare in order to prevent harm. Second, it corroborates the readings from the vision system by matching them against the internal world model. Third, it maintains an abstracted world state (AWS) that represents the world at a symbolic level and is used by the deliberative module. Fourth, it resolves the issue of perception delays by projecting the internal world model at "this" instant. Fifth, it performs sensor fusion to combine information from multiple sensors; e.g., when the vision system returns multiple estimates of the satellite's pose. Finally, it ensures that the internal mental state reflects the effects of egomotion and the passage of time.

Fig. 4. (a) Behavior-based perception and memory unit. (b) The abstracted world state represents the world symbolically. For example, the satellite is either Captured, Close, Near, Medium, or Far. The conversion from numerical quantities in the memory center to the symbols in the abstracted world state takes into account the current situation. For example, the translation from the numerical value of satellite pose confidence to the symbolic value Good or Bad depends upon the active behavior: for behavior Monitor, satellite position confidence is Good when it is greater than 0.67, whereas for behavior Capture, satellite position confidence is Good only when it is greater than 0.8.
At each instant, the perception unit receives the most current information from the active vision configurations (Fig. 2) and computes an estimate of the satellite position and orientation. In doing so, it takes into account contextual information, such as the current task, the predicted distance from the satellite, the operational ranges of the various vision configurations, and the confidence values returned by the active configurations. An αβ tracker then validates and smoothes the computed pose. Validation is done by comparing the new pose against the predicted pose using an adaptive threshold.
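A minimal αβ tracker of this kind, filtering a single scalar pose component, might look as follows. The gains are illustrative, the validation gate is shown fixed rather than adaptive, and the 0.5 s time step reflects the 2 Hz update rate quoted in Section 2:

```python
class AlphaBetaTracker:
    def __init__(self, x0, v0=0.0, alpha=0.85, beta=0.005, dt=0.5):
        self.x, self.v = x0, v0                   # state estimate and its rate of change
        self.alpha, self.beta, self.dt = alpha, beta, dt

    def predict(self):
        return self.x + self.v * self.dt          # constant-velocity prediction

    def update(self, measurement, gate=0.5):
        pred = self.predict()
        residual = measurement - pred
        if abs(residual) > gate:                  # validation: reject outliers vs. prediction
            self.x = pred                         # coast on the prediction instead
            return self.x, False
        self.x = pred + self.alpha * residual     # smooth toward the measurement
        self.v += (self.beta / self.dt) * residual
        return self.x, True
```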
The servicer craft sees its environment egocentrically. The memory center constantly updates the internal world representation to reflect the current position, heading, and speed of the robot. It also ensures that, in the absence of new readings from the perception center, the confidence in the world state decreases with time. The reactive module requires detailed sensory information, whereas the deliberative module deals with abstract features of the world. The memory center filters out unnecessary details from the sensory information and generates the AWS (Fig. 4), which describes the world symbolically.
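As a sketch of this numeric-to-symbolic abstraction, the following functions use the distance bins and behavior-dependent confidence thresholds read off Fig. 4 (0.5 m / 1.5 m / 5 m; 0.67 for Monitor, 0.8 for Capture); the function names are ours:

```python
def satellite_distance_symbol(distance_m: float, captured: bool) -> str:
    # Map a metric distance into the AWS vocabulary of Fig. 4(b).
    if captured:
        return "Captured"
    if distance_m < 0.5:
        return "Close"
    if distance_m < 1.5:
        return "Near"
    if distance_m < 5.0:
        return "Medium"
    return "Far"

CONFIDENCE_THRESHOLD = {"Monitor": 0.67, "Capture": 0.8}

def pose_confidence_symbol(confidence: float, active_behavior: str) -> str:
    # The same numeric confidence can be Good for Monitor yet Bad for Capture.
    return "Good" if confidence > CONFIDENCE_THRESHOLD[active_behavior] else "Bad"
```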
3.2 Symbolic Reasoning: Planning and Scene Interpretation
The symbolic reasoning module constructs plans 1) to accomplish goals and 2) to explain the changes in the AWS. The plan that best explains the evolution of the AWS is an interpretation of the scene, as it consists of events that might have happened to bring about the changes in the AWS. The cognitive vision system monitors the progress of the current task by examining the AWS, which is maintained in real time by the perception and memory module. Upon encountering an undesirable situation, the reasoning module tries to explain the errors by constructing an interpretation. If the reasoning module successfully finds a suitable interpretation, it suggests appropriate corrective steps; otherwise, it suggests the default procedure for handling anomalous situations.
The current prototype consists of two planners: Planner A specializes in the satellite capturing task and Planner B monitors the abstracted world state and detects and resolves undesirable situations. We have developed the planners in GOLOG, which is an extension of the situation calculus [16]. GOLOG uses logical statements to maintain an internal world state (fluents) and describe what actions an agent can perform (primitive action predicates), when these actions are valid (precondition predicates), and how these actions affect the world (successor state predicates). GOLOG provides high-level constructs, such as procedure calls, conditionals, loops, and non-deterministic choice, to specify complex procedures that model an agent and its environment. The logical foundations of GOLOG enable us to prove plan correctness properties, which is desirable.
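GOLOG itself is a logic-programming language; purely to make the above vocabulary concrete, here is a Python-flavored sketch of its bookkeeping for one primitive action. The fluent and action names are borrowed from Fig. 5, but the encoding (a state dictionary instead of situation terms) is ours:

```python
def poss_aLatch(arg, state):
    # Precondition predicate: the latch can only be operated once the system is on.
    return state["fStatus"] == "on"

def do_aLatch(arg, state):
    # Successor-state update: executing aLatch(arm) makes fLatch hold the value "armed".
    if not poss_aLatch(arg, state):
        raise ValueError("precondition of aLatch violated")
    new_state = dict(state)
    new_state["fLatch"] = "armed" if arg == "arm" else "unarmed"
    return new_state

state = {"fStatus": "on", "fLatch": "unarmed"}
state = do_aLatch("arm", state)   # now state["fLatch"] == "armed"
```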
Planner A (satellite capture):
  Actions: aTurnon(_), aLatch(_), aErrorHandle(_), aSensor(_,_), aSearch(_), aMonitor, aAlign, aContact, aGo(_,_,_), aSatAttCtrl(_), aCorrectSatSpeed
  Fluents: fStatus, fLatch, fSensor, fError, fSatPos, fSatPosConf, fSatCenter, fSatAlign, fSatSpeed, fSatAttCtrl, fSatContact
  Initial state: fStatus(off), fLatch(unarmed), fSensor(all,off), fSatPos(medium), fSatPosConf(no), fSatCenter(no), fAlign(no), fSatAttCtrl(on), fSatContact(no), fSatSpeed(yes), fError(no)
  Goal state: fSatContact(yes)
  The plan: aTurnon(on), aSensor(medium,on), aSearch(medium), aMonitor, aGo(medium,near,vis), aSensor(short,on), aSensor(medium,off), aAlign, aLatch(arm), aSatAttCtrl(off), aContact

Planner B (error diagnosis):
  Actions: aBadCamera, aSelfShadow, aGlare, aSun(_), aRange(_)
  Fluents: fSatPosConf, fSun, fRange
  Initial state: fRange(unknown), fSun(unknown), fSatPosConf(yes)
  Goal state: fSatConf(no)
  Explanation 1: aBadCamera (default); Solution 1: aRetry
  Explanation 2: aSun(front), aGlare; Solution 2: aAbort
  Explanation 3: aRange(near), aSun(behind), aSelfShadow; Solution 3: aRetryAfterRandomInterval

Fig. 5. Examples of the plans generated by Planner A and Planner B.
The planners cooperate to achieve the goal—safely capturing the satellite. The two planners interact through a plan execution and monitoring unit, which uses plan execution control knowledge. Upon receiving a new "satellite capture task" from the ground station, the plan execution and monitoring module activates Planner A, which generates a plan that transforms the current state of the world to the goal state—a state where the satellite is secured. Planner B, on the other hand, is only activated when the plan execution and monitoring module detects a problem, such as a sensor failure. Planner B generates all plans that will transform the last known "good" world state to the current "bad" world state. Next, it determines the most likely cause of the current fault by considering each plan in turn. After identifying the cause, Planner B suggests corrections. In the current prototype, corrections consist of "abort mission," "retry immediately," and "retry after a random interval of time" (the task is aborted if the total time exceeds the maximum allowed time for the current task). Finally, after the successful handling of the situation, Planner A resumes. A sketch of this execution-monitoring loop appears below.
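In the sketch, Planner A drives the capture task and Planner B is consulted only when a step fails; the three corrective actions mirror the description above, while the planner interfaces, names, and retry interval are assumptions:

```python
import random
import time

def execute_and_monitor(planner_a, planner_b, world, deadline_s):
    start = time.monotonic()
    plan = planner_a.plan(world.state(), goal="fSatContact(yes)")
    for step in plan:
        while not world.execute(step):              # step failed: ask Planner B to diagnose
            correction = planner_b.diagnose(world.last_good_state(), world.state())
            if correction == "abort" or time.monotonic() - start > deadline_s:
                world.park_manipulator()            # safe default for unresolved errors
                return False
            if correction == "retry_after_random_interval":
                time.sleep(random.uniform(1.0, 10.0))  # hypothetical wait range
            # "retry" (immediately) falls through and re-executes the step
    return True
```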
4 Results
We have tested the cognitive vision controller in a simulated virtual environment and in a physical lab environment that faithfully reproduces the illumination conditions of the space environment—strong light source, very little ambient light, and harsh shadows. The physical setup consisted of the MD Robotics Ltd. proprietary "Reuseable Space Vehicle Payload Handling Simulator," comprising two Fanuc robotic manipulators and the associated control software. One robot, with the camera stereo pair mounted on its end effector, acts as the servicer. The other robot carries a grapple-fixture-equipped satellite mockup and exhibits realistic satellite motion.
The cognitive vision controller met its requirements; i.e., safely capturing the satellite using vision-based sensing (see Fig. 3 for the kind of images used), while handling anomalous situations. We performed 800 test runs in the simulated environment and over 25 test runs on the physical robots. The controller never jeopardized its own safety or that of the target satellite. It gracefully recovered from sensing errors. In most cases, it was able to guide the vision system to re-acquire the satellite by identifying the cause and initiating a suitable search pattern. In situations where it could not resolve the error, it safely parked the manipulator and informed the ground station of its failure.
Fig. 6. The chaser robot captures the satellite using vision in harsh lighting conditions like those in orbit.
5 Conclusion
Future applications of computer vision will require more than just low-level vision; they will also have a high-level AI component to guide the vision system in a task-directed and deliberative manner, diagnose sensing problems, and suggest corrective steps. Also, an ALife-inspired reactive module that implements computational models of attention, context, and memory can act as the interface between the vision system and the symbolic reasoning module. We have demonstrated such a system within the context of space robotics. Our practical vision system interfaces object recognition and tracking with classical AI through a behavior-based perception and memory unit, and it successfully performs the complex task of autonomously capturing a free-flying satellite in harsh environmental conditions. After receiving a single high-level "dock" command, the system successfully captured the target satellite in most of our tests, while handling anomalous situations using its reactive and reasoning abilities.
Acknowledgments
The authors acknowledge the valuable technical contributions of R. Gillett, H.K. Ng, S. Greene, J. Richmond, Dr. M. Greenspan, M. Liu, and A. Chan. This work was funded by MD Robotics Limited and Precarn Associates.
References
[1] Roberts, L.: Machine perception of 3-D solids. In Trippit, J., Berkowitz, D., Chapp, L., Koester, C., Vanderburgh, A., eds.: Optical and Electro-Optical Information Processing, MIT Press (1965) 159–197
[2] Nilsson, N.J.: Shakey the robot. Technical Report 323, Artificial Intelligence Center, SRI International, Menlo Park, USA (1984)
[3] Wertz, J., Bell, R.: Autonomous rendezvous and docking technologies—status and prospects. In: SPIE's 17th Annual International Symposium on Aerospace/Defense Sensing, Simulation, and Controls, Orlando, USA (2003)
[4] Gurtuna, O.: Emerging space markets: Engines of growth for future space activities (2003) www.futuraspace.com/EmergingSpaceMarkets_fact_sheet.htm
[5] Polites, M.: An assessment of the technology of automated rendezvous and capture in space. Technical Report NASA/TP-1998-208528, Marshall Space Flight Center, Alabama, USA (1998)
[6] NASA, J.P.L.: Mars exploration rover mission home (2004) marsrovers.nasa.gov
[7] Burgard, W., Cremers, A.B., Fox, D., Hähnel, D., Lakemeyer, G., Schulz, D., Steiner, W., Thrun, S.: Experiences with an interactive museum tour-guide robot. Artificial Intelligence 114 (1999) 3–55
[8] Howarth, R.J., Buxton, H.: Conceptual descriptions from monitoring and watching image sequences. Image and Vision Computing 18 (2000) 105–135
[9] Arens, M., Nagel, H.H.: Behavioral knowledge representation for the understanding and creation of video sequences. In Günther, A., Kruse, R., Neumann, B., eds.: Proceedings of the 26th German Conference on Artificial Intelligence (KI-2003), Hamburg, Germany (2003) 149–163
[10] Fernyhough, J., Cohn, A.G., Hogg, D.C.: Constructing qualitative event models automatically from video input. Image and Vision Computing 18 (2000) 81–103
[11] Arens, M., Ottlik, A., Nagel, H.H.: Natural language texts for a cognitive vision system. In van Harmelen, F., ed.: Proceedings of the 15th European Conference on Artificial Intelligence (ECAI-2002), Amsterdam, The Netherlands, IOS Press (2002) 455–459
[12] Jasiobedzki, P., Greenspan, M., Roth, G., Ng, H., Witcomb, N.: Video-based system for satellite proximity operations. In: 7th ESA Workshop on Advanced Space Technologies for Robotics and Automation (ASTRA 2002), ESTEC, Noordwijk, The Netherlands (2002)
[13] Roth, G., Whitehead, A.: Using projective vision to find camera positions in an image sequence. In: Vision Interface (VI 2000), Montreal, Canada (2000) 87–94
[14] Greenspan, M., Jasiobedzki, P.: Pose determination of a free-flying satellite. In: Motion Tracking and Object Recognition (MTOR02), Las Vegas, USA (2002)
[15] Jasiobedzki, P., Greenspan, M., Roth, G.: Pose determination and tracking for autonomous satellite capture. In: Proceedings of the 6th International Symposium on Artificial Intelligence and Robotics & Automation in Space (i-SAIRAS 01), Montreal, Canada (2001)
[16] Lespérance, Y., Reiter, R., Lin, F., Scherl, R.: GOLOG: A logic programming language for dynamic domains. Journal of Logic Programming 31 (1997) 59–83