====================
Azure Configuration
====================
To get started with Azure, go to https://azure.microsoft.com/en-us/ and sign up. Once your account is set up you can sign in to the portal at https://portal.azure.com

From the portal, add a resource, select "AI + Machine Learning", and then add the "Speech Services" and "Computer Vision" APIs (only two services are needed for Azure, as the Computer Vision service handles both stills and video). You will need to give these services names – you can use anything you like. I call mine "CatDV_SpeechAPI_test" and "CatDV_VisionAPI_test".
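
If you prefer to script this step instead of clicking through the portal, a minimal sketch along these lines can create the same two services. It assumes the azure-identity and azure-mgmt-cognitiveservices Python packages, and the subscription ID, resource group and region shown are placeholders you would replace with your own:

    # Sketch: create the Speech and Computer Vision services programmatically.
    # Requires: pip install azure-identity azure-mgmt-cognitiveservices
    # "<subscription-id>", "my-resource-group" and "westeurope" are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

    client = CognitiveServicesManagementClient(DefaultAzureCredential(), "<subscription-id>")

    for name, kind in [("CatDV_SpeechAPI_test", "SpeechServices"),
                       ("CatDV_VisionAPI_test", "ComputerVision")]:
        client.accounts.begin_create(
            resource_group_name="my-resource-group",
            account_name=name,
            account={"location": "westeurope",
                     "kind": kind,
                     "sku": {"name": "S0"},
                     "properties": {}},
        ).result()  # wait for each deployment to finish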

Then go to each of these services in your dashboard, click on the "Keys" section, and copy the keys somewhere safe – you will need them to configure the Azure steps. Azure gives you two keys for each service, but you can use either one – it makes no difference.
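
Before wiring the keys into the worker action you can sanity-check them with a couple of direct REST calls. The sketch below assumes Python with the requests package; the region ("westeurope") and the sample image URL are placeholders, so use the region/endpoint shown on each resource's keys page:

    # Sketch: verify that the copied keys are accepted by Azure.
    import requests

    REGION = "westeurope"              # placeholder - match your resources' region
    SPEECH_KEY = "<your-speech-key>"
    VISION_KEY = "<your-vision-key>"

    # Speech: exchanging the key for a short-lived token should return HTTP 200.
    speech = requests.post(
        f"https://{REGION}.api.cognitive.microsoft.com/sts/v1.0/issueToken",
        headers={"Ocp-Apim-Subscription-Key": SPEECH_KEY},
    )
    print("Speech key OK" if speech.ok else f"Speech key failed: {speech.status_code}")

    # Vision: analyse a public test image and ask for a description.
    vision = requests.post(
        f"https://{REGION}.api.cognitive.microsoft.com/vision/v3.2/analyze",
        params={"visualFeatures": "Description"},
        headers={"Ocp-Apim-Subscription-Key": VISION_KEY},
        json={"url": "https://upload.wikimedia.org/wikipedia/commons/3/3c/Shaki_waterfall.jpg"},
    )
    print("Vision key OK" if vision.ok else f"Vision key failed: {vision.status_code}")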

NOTE: Azure's "Celebrities" Vision API is a limited-access feature. If this option is enabled for your organisation, feel free to enable it on the worker action. See https://learn.microsoft.com/en-us/legal/cognitive-services/computer-vision/limited-access for details.

====================
CatDV Azure AI v1.5
====================
Annotate Video:
- allow "landmarks" and "celebrities" to be optional on the worker action (see the sketch below).
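
For reference, here is a minimal sketch of how optional "Landmarks"/"Celebrities" toggles can be passed through to the Computer Vision analyze call. The function name, flag names and region are illustrative, not the plugin's actual implementation, and Celebrities still requires the Limited Access approval noted above:

    # Sketch: optional landmarks/celebrities detection via the "details" parameter.
    import requests

    def analyze_still(image_url, vision_key, region="westeurope",
                      landmarks=True, celebrities=False):
        details = []
        if landmarks:
            details.append("Landmarks")
        if celebrities:                      # only if Limited Access is approved
            details.append("Celebrities")

        params = {"visualFeatures": "Description,Tags"}
        if details:
            params["details"] = ",".join(details)

        resp = requests.post(
            f"https://{region}.api.cognitive.microsoft.com/vision/v3.2/analyze",
            params=params,
            headers={"Ocp-Apim-Subscription-Key": vision_key},
            json={"url": image_url},
        )
        resp.raise_for_status()
        return resp.json()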