A) Define security group(s) to allow all HTTP inbound/outbound traffic and assign those security group(s) to the Amazon SageMaker notebook instance.
B) Configure the Amazon SageMaker notebook instance to have access to the VPC. Grant permission in the KMS key policy to the notebook's KMS role.
C) Assign an IAM role to the Amazon SageMaker notebook with S3 read access to the dataset. Grant permission in the KMS key policy to that role.
D) Assign the same KMS key used to encrypt data in Amazon S3 to the Amazon SageMaker notebook instance.
Correct Answer
verified
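The question stem is not preserved in this dump, but option C's pattern, an IAM role with S3 read access plus a grant in the KMS key policy, can be sketched. A minimal key policy statement, with a hypothetical account ID and role name:

```python
import json

# Hypothetical role ARN; substitute the IAM role attached to the notebook.
notebook_role_arn = "arn:aws:iam::111122223333:role/ExampleSageMakerNotebookRole"

# A KMS key policy statement allowing that role to decrypt the S3 dataset.
key_policy_statement = {
    "Sid": "AllowNotebookRoleToDecryptDataset",
    "Effect": "Allow",
    "Principal": {"AWS": notebook_role_arn},
    "Action": ["kms:Decrypt", "kms:DescribeKey"],
    "Resource": "*",
}
print(json.dumps(key_policy_statement, indent=2))
```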
Multiple Choice
A) AWS DMS
B) Amazon Kinesis Data Streams
C) Amazon Kinesis Data Firehose
D) Amazon Kinesis Data Analytics
Correct Answer
verified
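The stem is missing here, so which option is right cannot be inferred from the text alone, but option B's service can be illustrated. A minimal sketch of pushing one record into Amazon Kinesis Data Streams with boto3, assuming a hypothetical stream name:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

record = {"sensor_id": "s-001", "reading": 42.7}
kinesis.put_record(
    StreamName="example-stream",           # assumed, pre-created stream
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["sensor_id"],      # same key -> same shard
)
```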
Multiple Choice
A) Build a content-based filtering recommendation engine with Apache Spark ML on Amazon EMR.
B) Build a collaborative filtering recommendation engine with Apache Spark ML on Amazon EMR.
C) Build a model-based filtering recommendation engine with Apache Spark ML on Amazon EMR.
D) Build a combinative filtering recommendation engine with Apache Spark ML on Amazon EMR.
Correct Answer
verified
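Collaborative filtering (option B) is available in Spark ML as the ALS estimator. A minimal sketch on a made-up ratings DataFrame:

```python
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("cf-example").getOrCreate()

# Tiny made-up ratings: (user, item, rating).
ratings = spark.createDataFrame(
    [(0, 10, 4.0), (0, 11, 2.0), (1, 10, 5.0), (1, 12, 1.0)],
    ["userId", "itemId", "rating"],
)

# ALS is Spark ML's built-in collaborative filtering estimator.
als = ALS(userCol="userId", itemCol="itemId", ratingCol="rating",
          rank=8, maxIter=5, coldStartStrategy="drop")
model = als.fit(ratings)
model.recommendForAllUsers(2).show()   # top-2 items per user
```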
Multiple Choice
A) AWS Glue as the data catalog; Amazon Kinesis Data Streams and Amazon Kinesis Data Analytics for real-time data insights; Amazon Kinesis Data Firehose for delivery to Amazon ES for clickstream analytics; Amazon EMR to generate personalized product recommendations
B) Amazon Athena as the data catalog; Amazon Kinesis Data Streams and Amazon Kinesis Data Analytics for near-real-time data insights; Amazon Kinesis Data Firehose for clickstream analytics; AWS Glue to generate personalized product recommendations
C) AWS Glue as the data catalog; Amazon Kinesis Data Streams and Amazon Kinesis Data Analytics for historical data insights; Amazon Kinesis Data Firehose for delivery to Amazon ES for clickstream analytics; Amazon EMR to generate personalized product recommendations
D) Amazon Athena as the data catalog; Amazon Kinesis Data Streams and Amazon Kinesis Data Analytics for historical data insights; Amazon DynamoDB streams for clickstream analytics; AWS Glue to generate personalized product recommendations
Correct Answer
verified
Multiple Choice
A) Use encryption keys that are stored in AWS CloudHSM to encrypt the ML data volumes, and to encrypt the model artifacts and data in Amazon S3.
B) Use SageMaker built-in transient keys to encrypt the ML data volumes. Enable default encryption for new Amazon Elastic Block Store (Amazon EBS) volumes.
C) Use customer managed keys in AWS Key Management Service (AWS KMS) to encrypt the ML data volumes, and to encrypt the model artifacts and data in Amazon S3.
D) Use AWS Security Token Service (AWS STS) to create temporary tokens to encrypt the ML storage volumes, and to encrypt the model artifacts and data in Amazon S3.
Correct Answer
verified
Multiple Choice
A) Use the Amazon SageMaker k-Nearest-Neighbors (kNN) algorithm on the single time series consisting of the full year of data with a predictor_type of regressor.
B) Use Amazon SageMaker Random Cut Forest (RCF) on the single time series consisting of the full year of data.
C) Use the Amazon SageMaker Linear Learner algorithm on the single time series consisting of the full year of data with a predictor_type of regressor.
D) Use the Amazon SageMaker Linear Learner algorithm on the single time series consisting of the full year of data with a predictor_type of classifier.
Correct Answer
verified
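The predictor_type hyperparameter named in options C and D belongs to the Linear Learner estimator in the SageMaker Python SDK; its valid values are 'regressor', 'binary_classifier', and 'multiclass_classifier'. A minimal sketch (role ARN is a placeholder):

```python
from sagemaker import LinearLearner

linear = LinearLearner(
    role="arn:aws:iam::111122223333:role/ExampleSageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    predictor_type="regressor",  # or 'binary_classifier' / 'multiclass_classifier'
)
```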
Multiple Choice
A) Decision tree
B) Linear support vector machine (SVM)
C) Naive Bayesian classifier
D) Single Perceptron with sigmoidal activation function
Correct Answer
verified
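The stem is not shown, but three of the four listed model families are one-liners in scikit-learn, so they are easy to contrast on a toy non-linearly separable dataset:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB

# Toy non-linearly separable data.
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("linear SVM", LinearSVC()),
                  ("naive Bayes", GaussianNB())]:
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))
```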
Multiple Choice
A) Decrease the cooldown period for the scale-in activity. Increase the configured maximum capacity of instances.
B) Replace the current endpoint with a multi-model endpoint using SageMaker.
C) Set up Amazon API Gateway and AWS Lambda to trigger the SageMaker inference endpoint.
D) Increase the cooldown period for the scale-out activity.
Correct Answer
verified
Multiple Choice
A) Increase the number of S3 prefixes for the delivery stream to write to.
B) Decrease the retention period for the data stream.
C) Increase the number of shards for the data stream.
D) Add more consumers using the Kinesis Client Library (KCL).
Correct Answer
verified
Multiple Choice
A) Apply dimensionality reduction by using the principal component analysis (PCA) algorithm.
B) Drop the features with low correlation scores by using a Jupyter notebook.
C) Apply anomaly detection by using the Random Cut Forest (RCF) algorithm.
D) Concatenate the features with high correlation scores by using a Jupyter notebook.
Correct Answer
verified
Multiple Choice
A) Review SageMaker logs that have been written to Amazon S3 by leveraging Amazon Athena and Amazon QuickSight to visualize logs as they are being produced.
B) Generate an Amazon CloudWatch dashboard to create a single view for the latency, memory utilization, and CPU utilization metrics that are outputted by Amazon SageMaker.
C) Build custom Amazon CloudWatch Logs and then leverage Amazon ES and Kibana to query and visualize the log data as it is generated by Amazon SageMaker.
D) Send Amazon CloudWatch Logs that were generated by Amazon SageMaker to Amazon ES and use Kibana to query and visualize the log data.
Correct Answer
verified
Multiple Choice
A) Implement the solution using AWS Deep Learning Containers and run the container as a job using AWS Batch on a GPU-compatible Spot Instance.
B) Implement the solution using a low-cost GPU-compatible Amazon EC2 instance and use the AWS Instance Scheduler to schedule the task.
C) Implement the solution using AWS Deep Learning Containers, run the workload using AWS Fargate running on Spot Instances, and then schedule the task using the built-in task scheduler.
D) Implement the solution using Amazon ECS running on Spot Instances and schedule the task using the ECS service scheduler.
Correct Answer
verified
Multiple Choice
A) Use regression on customer profile data to understand key characteristics of consumer segments. Find similar profiles on social media.
B) Use clustering on customer profile data to understand key characteristics of consumer segments. Find similar profiles on social media.
C) Use a recommendation engine on customer profile data to understand key characteristics of consumer segments. Find similar profiles on social media.
D) Use a decision tree classifier engine on customer profile data to understand key characteristics of consumer segments. Find similar profiles on social media.
Correct Answer
verified
Multiple Choice
A) Amazon Comprehend syntax analysis and entity detection
B) Amazon SageMaker BlazingText cbow mode
C) Natural Language Toolkit (NLTK) stemming and stop word removal
D) Scikit-learn term frequency-inverse document frequency (TF-IDF) vectorizer
Correct Answer
verified
Multiple Choice
A) Plot a histogram of the features and compute their standard deviation. Remove features with high variance.
B) Plot a histogram of the features and compute their standard deviation. Remove features with low variance.
C) Build a heatmap showing the correlation of the dataset against itself. Remove features with low mutual correlation scores.
D) Run a correlation check of all features against the target variable. Remove features with low target variable correlation scores.
Correct Answer
verified
Multiple Choice
A) Linear regression is inappropriate. The residuals do not have constant variance.
B) Linear regression is inappropriate. The underlying data has outliers.
C) Linear regression is appropriate. The residuals have a zero mean.
D) Linear regression is appropriate. The residuals have constant variance.
Correct Answer
verified
Multiple Choice
A) Use a proxy server at each local office and for each camera, and stream the RTSP feed to a unique Amazon Kinesis Video Streams video stream. On each stream, use Amazon Rekognition Video and create a stream processor to detect faces from a collection of known employees, and alert when non-employees are detected.
B) Use a proxy server at each local office and for each camera, and stream the RTSP feed to a unique Amazon Kinesis Video Streams video stream. On each stream, use Amazon Rekognition Image to detect faces from a collection of known employees and alert when non-employees are detected.
C) Install AWS DeepLens cameras and use the DeepLens_Kinesis_Video module to stream video to Amazon Kinesis Video Streams for each camera. On each stream, use Amazon Rekognition Video and create a stream processor to detect faces from a collection on each stream, and alert when non-employees are detected.
D) Install AWS DeepLens cameras and use the DeepLens_Kinesis_Video module to stream video to Amazon Kinesis Video Streams for each camera. On each stream, run an AWS Lambda function to capture image fragments and then call Amazon Rekognition Image to detect faces from a collection of known employees, and alert when non-employees are detected.
Correct Answer
verified
Multiple Choice
A) The training channel identifying the location of training data on an Amazon S3 bucket.
B) The validation channel identifying the location of validation data on an Amazon S3 bucket.
C) The IAM role that Amazon SageMaker can assume to perform tasks on behalf of the users.
D) Hyperparameters in a JSON array as documented for the algorithm used.
E) The Amazon EC2 instance class specifying whether training will be run using CPU or GPU.
F) The output path specifying where on an Amazon S3 bucket the trained model will persist.
Correct Answer
verified
Multiple Choice
A) Use AWS IoT Analytics for ingestion, storage, and further analysis. Use Jupyter notebooks from within AWS IoT Analytics to carry out analysis for anomalies.
B) Use Amazon S3 for ingestion, storage, and further analysis. Use an Amazon EMR cluster to carry out Apache Spark ML k-means clustering to determine anomalies.
C) Use Amazon S3 for ingestion, storage, and further analysis. Use the Amazon SageMaker Random Cut Forest (RCF) algorithm to determine anomalies.
D) Use Amazon Kinesis Data Firehose for ingestion and Amazon Kinesis Data Analytics Random Cut Forest (RCF) to perform anomaly detection. Use Kinesis Data Firehose to store data in Amazon S3 for further analysis.
Correct Answer
verified
Multiple Choice
A) Recall
B) Misclassification rate
C) Mean absolute percentage error (MAPE)
D) Area Under the ROC Curve (AUC)
Correct Answer
verified
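AUC (option D) scores ranked probabilities rather than hard labels, which is why it holds up on imbalanced binary problems. A minimal sketch on a synthetic 95/5 class split:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced binary data: ~95% negatives.
X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```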