AI-102
Practice makes perfect! Take this quiz now to test your knowledge and boost your confidence for the real exam.
1 / 5
HOTSPOT You are developing the shopping on-the-go project. You are configuring access to the QnA Maker resources. Which role should you assign to AllUsers and LeadershipTeam? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Box 1: QnA Maker Editor
Scenario: Provide all employees with the ability to edit Q&As.
The QnA Maker Editor (read/write) role has the following permissions: Create KB API, Update KB API, Replace KB API, Replace Alterations, and "Train API" (in the new service model v5).
Box 2: Contributor
Scenario: Only senior managers must be able to publish updates.
Contributor permission: all actions except the ability to add new members to roles.
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/reference-role-based-access-control
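For illustration only, the following is a minimal Python sketch of how these two assignments could be created through the Azure Role Assignments REST API. The subscription, resource names, group object IDs, and role-definition GUIDs are placeholders, not values from the case study.

# Sketch (placeholder values): assign roles to the AllUsers and LeadershipTeam
# groups at the scope of the QnA Maker resource via the Role Assignments REST API.
import uuid
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION = "<subscription-id>"
SCOPE = (
    f"/subscriptions/{SUBSCRIPTION}/resourceGroups/<resource-group>"
    "/providers/Microsoft.CognitiveServices/accounts/<qna-maker-account>"
)
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

def assign_role(role_definition_guid: str, principal_object_id: str) -> None:
    """Create one role assignment at the resource scope."""
    assignment_id = str(uuid.uuid4())  # role assignment names are GUIDs
    url = (
        f"https://management.azure.com{SCOPE}"
        f"/providers/Microsoft.Authorization/roleAssignments/{assignment_id}"
        "?api-version=2022-04-01"
    )
    body = {
        "properties": {
            "roleDefinitionId": (
                f"/subscriptions/{SUBSCRIPTION}/providers/Microsoft.Authorization"
                f"/roleDefinitions/{role_definition_guid}"
            ),
            "principalId": principal_object_id,
        }
    }
    requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"}).raise_for_status()

# AllUsers -> QnA Maker Editor, LeadershipTeam -> Contributor (GUIDs are placeholders).
assign_role("<qna-maker-editor-role-definition-guid>", "<allusers-group-object-id>")
assign_role("<contributor-role-definition-guid>", "<leadershipteam-group-object-id>")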
2 / 5
DRAG DROP You are developing the smart e-commerce project. You need to design the skillset to include the contents of PDFs in searches. How should you complete the skillset design diagram? To answer, drag the appropriate services to the correct stages. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Box 1: Azure Blob storage
At the start of the pipeline, you have unstructured text or non-text content (such as images, scanned documents, or JPEG files). Data must exist in an Azure data storage service that can be accessed by an indexer.
Box 2: Computer Vision API
Scenario: Provide users with the ability to search insight gained from the images, manuals, and videos associated with the products.
The Computer Vision Read API is Azure's latest OCR technology that extracts printed text (in several languages), handwritten text (English only), digits, and currency symbols from images and multi-page PDF documents.
Box 3: Translator API
Scenario: Product descriptions, transcripts, and all text must be available in English, Spanish, and Portuguese.
Box 4: Azure Files
Scenario: Store all raw insight data that was generated, so the data can be processed later.
Incorrect answers: The Custom Vision API from Microsoft Azure learns to recognize specific content in imagery and becomes smarter with training and time.
Reference: https://docs.microsoft.com/en-us/azure/search/cognitive-search-concept-intro
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview-ocr
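As a rough sketch only (the search service name, keys, and skill wiring are assumptions, not from the case study), the enrichment stage of such a skillset could be defined against the Cognitive Search REST API along these lines, with a built-in OCR skill feeding a translation skill:

# Sketch (placeholder names/keys): create a skillset that OCRs images extracted
# from blobs (e.g. PDF pages) and translates the recognized text.
import requests

SEARCH_SERVICE = "<search-service-name>"
API_KEY = "<admin-api-key>"

skillset = {
    "name": "pdf-skillset",
    "description": "OCR the content of PDFs/images, then translate the text",
    "skills": [
        {
            "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
            "context": "/document/normalized_images/*",
            "inputs": [{"name": "image", "source": "/document/normalized_images/*"}],
            "outputs": [{"name": "text", "targetName": "extractedText"}],
        },
        {
            "@odata.type": "#Microsoft.Skills.Text.TranslationSkill",
            "context": "/document/normalized_images/*",
            "defaultToLanguageCode": "en",
            "inputs": [{"name": "text", "source": "/document/normalized_images/*/extractedText"}],
            "outputs": [{"name": "translatedText", "targetName": "translatedText"}],
        },
    ],
    # A Cognitive Services resource is attached to bill the built-in skills.
    "cognitiveServices": {
        "@odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
        "key": "<cognitive-services-key>",
    },
}

requests.put(
    f"https://{SEARCH_SERVICE}.search.windows.net/skillsets/pdf-skillset?api-version=2020-06-30",
    json=skillset,
    headers={"api-key": API_KEY},
).raise_for_status()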
3 / 5
DRAG DROP You are planning the product creation project. You need to recommend a process for analyzing videos. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. (Choose four.)
Scenario: All videos must have transcripts that are associated to the video and included in product descriptions. Product descriptions, transcripts, and all text must be available in English, Spanish, and Portuguese.
Step 1: Upload the video to Blob storage. Given a video or audio file, the file is first dropped into Blob storage.
Step 2: Index the video by using the Video Indexer API. When a video is indexed, Video Indexer produces JSON content that contains details of the specified video insights. The insights include transcripts, OCRs, faces, topics, blocks, and more.
Step 3: Extract the transcript from the Video Indexer API.
Step 4: Translate the transcript by using the Translator API.
Reference: https://azure.microsoft.com/en-us/blog/get-video-insights-in-even-more-languages/
https://docs.microsoft.com/en-us/azure/media-services/video-indexer/video-indexer-output-json-v2
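A rough Python sketch of that sequence follows. The account ID, access token, storage connection string, and file names are placeholders, and in practice the indexing call is asynchronous, so the index should be polled until processing completes.

# Sketch (placeholder credentials): the four-step video analysis flow.
import requests
from azure.storage.blob import BlobClient

# Step 1: Upload the video to Blob storage.
blob = BlobClient.from_connection_string(
    "<storage-connection-string>", container_name="videos", blob_name="product-demo.mp4"
)
with open("product-demo.mp4", "rb") as f:
    blob.upload_blob(f, overwrite=True)
video_url = blob.url  # in practice, generate a SAS URL so Video Indexer can read it

VI = "https://api.videoindexer.ai"
LOCATION, ACCOUNT_ID, ACCESS_TOKEN = "<location>", "<account-id>", "<access-token>"

# Step 2: Index the video by using the Video Indexer API.
upload = requests.post(
    f"{VI}/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos",
    params={"accessToken": ACCESS_TOKEN, "name": "product-demo", "videoUrl": video_url},
)
video_id = upload.json()["id"]

# Step 3: Extract the transcript from the Video Indexer output JSON
# (poll this endpoint until the indexing state is "Processed").
index = requests.get(
    f"{VI}/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos/{video_id}/Index",
    params={"accessToken": ACCESS_TOKEN},
).json()
transcript = " ".join(line["text"] for line in index["videos"][0]["insights"]["transcript"])

# Step 4: Translate the transcript by using the Translator API.
translated = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "to": ["es", "pt"]},
    headers={"Ocp-Apim-Subscription-Key": "<translator-key>",
             "Ocp-Apim-Subscription-Region": "<region>"},
    json=[{"text": transcript}],
).json()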
4 / 5
HOTSPOT You are planning the product creation project. You need to build the REST endpoint to create the multilingual product descriptions. How should you complete the URI? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Box 1: api-nam.cognitive.microsofttranslator.com
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/translator/reference/v3-0-reference
Box 2: /translate
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/translator/reference/v3-0-translate
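As an illustration (the subscription key, region, and sample text are assumptions), the completed URI could be called from Python like this to return English, Spanish, and Portuguese in a single request:

# Sketch (placeholder key/region): call the Americas regional Translator endpoint.
import requests

resp = requests.post(
    "https://api-nam.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "to": ["en", "es", "pt"]},
    headers={"Ocp-Apim-Subscription-Key": "<translator-key>",
             "Ocp-Apim-Subscription-Region": "<resource-region>"},
    json=[{"text": "Wireless noise-cancelling headphones with 30-hour battery life."}],
)
for item in resp.json():
    for translation in item["translations"]:
        print(translation["to"], translation["text"])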
5 / 5
You are developing the smart e-commerce project. You need to implement autocompletion as part of the Cognitive Search solution. Which three actions should you perform? Each correct answer presents part of the solution. (Choose three.) NOTE: Each correct selection is worth one point.
query parameter.
E. Set the searchAnalyzer property for the three product name variants.
F. Set the analyzer property for the three product name variants.
Scenario: Support autocompletion and autosuggestion based on all product name variants.
A: Call a suggester-enabled query, in the form of a Suggestion request or Autocomplete request, using an API. API usage is illustrated in the following call to the Autocomplete REST API:
POST /indexes/myxboxgames/docs/autocomplete?search&api-version=2020-06-30
{
  "search": "minecraf",
  "suggesterName": "sg"
}
B: In Azure Cognitive Search, typeahead or "search-as-you-type" is enabled through a suggester. A suggester provides a list of fields that undergo additional tokenization, generating prefix sequences to support matches on partial terms. For example, a suggester that includes a City field with a value of "Seattle" will have prefix combinations of "sea", "seat", "seatt", and "seattl" to support typeahead.
F: Use the default standard Lucene analyzer ("analyzer": null) or a language analyzer (for example, "analyzer": "en.Microsoft") on the field.
Reference: https://docs.microsoft.com/en-us/azure/search/index-add-suggesters
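A minimal Python sketch tying these pieces together is below; the service name, index name, field names, admin key, and analyzer names are assumptions for illustration, not part of the case study.

# Sketch (placeholder names/keys): create an index with a suggester over the three
# product-name variant fields, then call the Autocomplete endpoint shown above.
import requests

SEARCH = "https://<search-service>.search.windows.net"
HEADERS = {"api-key": "<admin-api-key>"}

index = {
    "name": "products",
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        # F: the analyzer property is set on each variant field (names are illustrative).
        {"name": "productNameEn", "type": "Edm.String", "analyzer": "en.microsoft"},
        {"name": "productNameEs", "type": "Edm.String", "analyzer": "es.microsoft"},
        {"name": "productNamePt", "type": "Edm.String", "analyzer": "pt-Br.microsoft"},
    ],
    # B: the suggester sources all product name variant fields.
    "suggesters": [
        {"name": "sg", "searchMode": "analyzingInfixMatching",
         "sourceFields": ["productNameEn", "productNameEs", "productNamePt"]},
    ],
}
requests.put(f"{SEARCH}/indexes/products?api-version=2020-06-30",
             json=index, headers=HEADERS).raise_for_status()

# A: query the suggester-enabled Autocomplete endpoint.
resp = requests.post(
    f"{SEARCH}/indexes/products/docs/autocomplete?api-version=2020-06-30",
    json={"search": "minecraf", "suggesterName": "sg"},
    headers=HEADERS,
)
print(resp.json())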