https://www.examtopics.com/exams/google/associate-cloud-engineer/view/
Associate Cloud Engineer Exam - Free Actual Q&As, Page 1 | ExamTopics
This document works through the GCP Associate Cloud Engineer dump questions above.
201. Your company has multiple projects linked to a single billing account in Google Cloud. You need to visualize the costs with specific metrics that should be dynamically calculated based on company-specific criteria. You want to automate the process. What should you do?
- A. In the Google Cloud console, visualize the costs related to the projects in the Reports section.
- B. In the Google Cloud console, visualize the costs related to the projects in the Cost breakdown section.
- C. In the Google Cloud console, use the export functionality of the Cost table. Create a Looker Studio dashboard on top of the CSV export.
- D. Configure Cloud Billing data export to BigQuery for the billing account. Create a Looker Studio dashboard on top of the BigQuery export.
Exporting Cloud Billing data to BigQuery enables custom, SQL-based analysis using company-specific criteria.
Looker Studio (formerly Data Studio) can connect directly to the BigQuery export to build a continuously refreshed cost dashboard.
Answer: D
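As a sketch of the query step behind option D — the dataset and table names below are placeholders (the real export table name includes your billing account ID) — per-project costs can be aggregated once the export is configured:

```shell
# Hypothetical dataset/table; the actual export table is created by Cloud Billing.
bq query --use_legacy_sql=false '
SELECT project.name, SUM(cost) AS total_cost
FROM `my_billing_dataset.gcp_billing_export_v1_XXXXXX`
GROUP BY project.name
ORDER BY total_cost DESC'
```

Looker Studio can then use this dataset (or a view on top of it) as its data source.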
202. You have an application that runs on Compute Engine VM instances in a custom Virtual Private Cloud (VPC). Your company’s security policies only allow the use of internal IP addresses on VM instances and do not let VM instances connect to the internet. You need to ensure that the application can access a file hosted in a Cloud Storage bucket within your project. What should you do?
- A. Enable Private Service Access on the Cloud Storage Bucket.
- B. Add storage.googleapis.com to the list of restricted services in a VPC Service Controls perimeter and add your project to the list of protected projects.
- C. Enable Private Google Access on the subnet within the custom VPC.
- D. Deploy a Cloud NAT instance and route the traffic to the dedicated IP address of the Cloud Storage bucket.
Private Service Access connects a VPC to Google-managed services that run inside a VPC network (for example, Cloud SQL); it does not apply to Cloud Storage.
Enabling Private Google Access lets VM instances with only internal IPs reach Google Cloud services (for example, Cloud Storage and BigQuery) without internet access.
Cloud NAT provides internet access for instances without external IPs, but Cloud Storage is reachable via Private Google Access without any NAT.
Answer: C
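A minimal sketch of option C, assuming placeholder subnet and region names:

```shell
# Enable Private Google Access on the subnet (names are placeholders)
gcloud compute networks subnets update my-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access
```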
203. Your company completed the acquisition of a startup and is now merging the IT systems of both companies. The startup had a production Google Cloud project in their organization. You need to move this project into your organization and ensure that the project is billed to your organization. You want to accomplish this task with minimal effort. What should you do?
- A. Use the projects.move method to move the project to your organization. Update the billing account of the project to that of your organization.
- B. Ensure that you have an Organization Administrator Identity and Access Management (IAM) role assigned to you in both organizations. Navigate to the Resource Manager in the startup’s Google Cloud organization, and drag the project to your company's organization.
- C. Create a Private Catalog for the Google Cloud Marketplace, and upload the resources of the startup's production project to the Catalog. Share the Catalog with your organization, and deploy the resources in your company’s project.
- D. Create an infrastructure-as-code template for all resources in the project by using Terraform, and deploy that template to a new project in your organization. Delete the project from the startup’s Google Cloud organization.
Google Cloud provides the projects.move API to move a project from one organization to another.
The Terraform approach does not move the project; it only replicates resources, and the existing project ID and data are not preserved.
Answer: A
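A sketch of option A's two steps, assuming placeholder IDs and that your gcloud version exposes projects.move on the beta surface:

```shell
# Move the project into your organization, then relink billing (IDs are placeholders)
gcloud beta projects move my-startup-project --organization=123456789012
gcloud billing projects link my-startup-project \
    --billing-account=ABCDEF-012345-6789AB
```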
204. All development (dev) teams in your organization are located in the United States. Each dev team has its own Google Cloud project. You want to restrict access so that each dev team can only create cloud resources in the United States (US). What should you do?
- A. Create a folder to contain all the dev projects. Create an organization policy to limit resources in US locations.
- B. Create an organization to contain all the dev projects. Create an Identity and Access Management (IAM) policy to limit the resources in US regions.
- C. Create an Identity and Access Management (IAM) policy to restrict the resources locations in the US. Apply the policy to all dev projects.
- D. Create an Identity and Access Management (IAM) policy to restrict the resources locations in all dev projects. Apply the policy to all dev roles.
Google Cloud restricts resource locations through an organization policy.
Grouping all dev projects under a folder and applying the policy to that folder lets every project inherit it.
IAM policies cannot restrict resource locations.
Answer: A
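A sketch of the folder-level policy in option A, using the documented `in:us-locations` value group (folder ID is a placeholder):

```shell
# Restrict resources under the dev folder to US locations
gcloud resource-manager org-policies allow constraints/gcp.resourceLocations \
    in:us-locations --folder=123456789
```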
205. You are configuring Cloud DNS. You want to create DNS records to point home.mydomain.com, mydomain.com, and www.mydomain.com to the IP address of your Google Cloud load balancer. What should you do?
- A. Create one CNAME record to point mydomain.com to the load balancer, and create two A records to point WWW and HOME to mydomain.com respectively.
- B. Create one CNAME record to point mydomain.com to the load balancer, and create two AAAA records to point WWW and HOME to mydomain.com respectively.
- C. Create one A record to point mydomain.com to the load balancer, and create two CNAME records to point WWW and HOME to mydomain.com respectively.
- D. Create one A record to point mydomain.com to the load balancer, and create two NS records to point WWW and HOME to mydomain.com respectively.
The A record must point mydomain.com directly to the IP address of the Google Cloud load balancer.
CNAME records alias www.mydomain.com and home.mydomain.com to mydomain.com, which simplifies management.
AAAA records are for IPv6 only, and NS records designate name servers.
Answer: C
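A sketch of option C with Cloud DNS (zone name and IP are placeholders; note the trailing dots on DNS names):

```shell
gcloud dns record-sets create mydomain.com. --zone=my-zone \
    --type=A --ttl=300 --rrdatas=203.0.113.10
gcloud dns record-sets create www.mydomain.com. --zone=my-zone \
    --type=CNAME --ttl=300 --rrdatas=mydomain.com.
gcloud dns record-sets create home.mydomain.com. --zone=my-zone \
    --type=CNAME --ttl=300 --rrdatas=mydomain.com.
```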
206. You have two subnets (subnet-a and subnet-b) in the default VPC. Your database servers are running in subnet-a. Your application servers and web servers are running in subnet-b. You want to configure a firewall rule that only allows database traffic from the application servers to the database servers. What should you do?
- A. • Create service accounts sa-app and sa-db.
• Associate service account sa-app with the application servers and the service account sa-db with the database servers.
• Create an ingress firewall rule to allow network traffic from source service account sa-app to target service account sa-db.
- B. • Create network tags app-server and db-server.
• Add the app-server tag to the application servers and the db-server tag to the database servers.
• Create an egress firewall rule to allow network traffic from source network tag app-server to target network tag db-server.
- C. • Create a service account sa-app and a network tag db-server.
• Associate the service account sa-app with the application servers and the network tag db-server with the database servers.
• Create an ingress firewall rule to allow network traffic from source VPC IP addresses and target the subnet-a IP addresses.
- D. • Create a network tag app-server and service account sa-db.
• Add the tag to the application servers and associate the service account with the database servers.
• Create an egress firewall rule to allow network traffic from source network tag app-server to target service account sa-db.
Service-account-based firewall rules are the most secure approach: assign distinct service accounts to the application servers (sa-app) and database servers (sa-db), then create an ingress firewall rule allowing traffic from the sa-app service account to the sa-db service account.
Egress firewall rules control outbound traffic, which is the wrong direction for restricting what reaches the database servers.
Answer: A
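A sketch of the ingress rule in option A (service-account emails and port are placeholders; MySQL would use 3306, PostgreSQL 5432):

```shell
gcloud compute firewall-rules create allow-app-to-db \
    --network=default --direction=INGRESS --action=ALLOW --rules=tcp:3306 \
    --source-service-accounts=sa-app@my-project.iam.gserviceaccount.com \
    --target-service-accounts=sa-db@my-project.iam.gserviceaccount.com
```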
207. Your team wants to deploy a specific content management system (CMS) solution to Google Cloud. You need a quick and easy way to deploy and install the solution. What should you do?
- A. Search for the CMS solution in Google Cloud Marketplace. Use gcloud CLI to deploy the solution.
- B. Search for the CMS solution in Google Cloud Marketplace. Deploy the solution directly from Cloud Marketplace.
- C. Search for the CMS solution in Google Cloud Marketplace. Use Terraform and the Cloud Marketplace ID to deploy the solution with the appropriate parameters.
- D. Use the installation guide of the CMS provider. Perform the installation through your configuration management system.
Deploying directly from Cloud Marketplace automates configuration and installation.
Answer: B
208. You are working for a startup that was officially registered as a business 6 months ago. As your customer base grows, your use of Google Cloud increases. You want to allow all engineers to create new projects without asking them for their credit card information. What should you do?
- A. Create a Billing account, associate a payment method with it, and provide all project creators with permission to associate that billing account with their projects.
- B. Grant all engineers permission to create their own billing accounts for each new project.
- C. Apply for monthly invoiced billing, and have a single invoice for the project paid by the finance team.
- D. Create a billing account, associate it with a monthly purchase order (PO), and send the PO to Google Cloud.
After setting up a billing account, granting engineers permission to associate it with their projects removes the need for them to enter credit card information.
PO-based invoicing suits large enterprises and is impractical for a 6-month-old startup.
Answer: A
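A sketch of the permission grant in option A, assuming a recent gcloud that exposes billing-account IAM bindings (IDs and group are placeholders):

```shell
# Let engineers attach the shared billing account to new projects
gcloud billing accounts add-iam-policy-binding ABCDEF-012345-6789AB \
    --member=group:engineers@example.com --role=roles/billing.user
```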
209. Your continuous integration and delivery (CI/CD) server can’t execute Google Cloud actions in a specific project because of permission issues. You need to validate whether the used service account has the appropriate roles in the specific project. What should you do?
- A. Open the Google Cloud console, and check the Identity and Access Management (IAM) roles assigned to the service account at the project or inherited from the folder or organization levels.
- B. Open the Google Cloud console, and check the organization policies.
- C. Open the Google Cloud console, and run a query to determine which resources this service account can access.
- D. Open the Google Cloud console, and run a query of the audit logs to find permission denied errors for this service account.
To verify that the CI/CD service account has the required roles in the project, inspect them in the IAM console, including roles inherited from the folder or organization level.
Organization policies relate to permission configuration, but they are not a way to directly check a service account's roles in a specific project.
Answer: A
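The same check can be scripted; this is a sketch with placeholder project and service-account names:

```shell
# List roles granted to the service account in the project
gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:ci-cd@my-project.iam.gserviceaccount.com" \
    --format="table(bindings.role)"
```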
210. Your team is using Linux instances on Google Cloud. You need to ensure that your team logs in to these instances in the most secure and cost efficient way. What should you do?
- A. Attach a public IP to the instances and allow incoming connections from the internet on port 22 for SSH.
- B. Use the gcloud compute ssh command with the --tunnel-through-iap flag. Allow ingress traffic from the IP range 35.235.240.0/20 on port 22.
- C. Use a third party tool to provide remote access to the instances.
- D. Create a bastion host with public internet access. Create the SSH tunnel to the instance through the bastion host.
Opening the SSH port to the internet creates a significant security risk.
Cloud IAP (Identity-Aware Proxy) enables SSH without a public IP, and restricting ingress to Google's IAP range (35.235.240.0/20) limits access.
A bastion host adds extra management cost.
Answer: B
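A sketch of option B (VM, zone, and rule names are placeholders):

```shell
# Allow IAP's range on port 22, then tunnel SSH through IAP
gcloud compute firewall-rules create allow-iap-ssh \
    --network=default --direction=INGRESS --action=ALLOW --rules=tcp:22 \
    --source-ranges=35.235.240.0/20
gcloud compute ssh my-vm --zone=us-central1-a --tunnel-through-iap
```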
211. An external member of your team needs list access to compute images and disks in one of your projects. You want to follow Google-recommended practices when you grant the required permissions to this user. What should you do?
- A. Create a custom role, and add all the required compute.disks.list and compute.images.list permissions as includedPermissions. Grant the custom role to the user at the project level.
- B. Create a custom role based on the Compute Image User role. Add the compute.disks.list to the includedPermissions field. Grant the custom role to the user at the project level.
- C. Create a custom role based on the Compute Storage Admin role. Exclude unnecessary permissions from the custom role. Grant the custom role to the user at the project level.
- D. Grant the Compute Storage Admin role at the project level.
Google recommends reusing predefined roles where possible, but following least privilege here means creating a custom role that contains only the required compute.disks.list and compute.images.list permissions as includedPermissions and granting it at the project level; basing the role on Compute Image User or Compute Storage Admin (B, C) would carry more permissions than list access requires.
Answer: A
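A sketch of option A (project ID, role ID, and user email are placeholders):

```shell
# Create a minimal list-only custom role and grant it at the project level
gcloud iam roles create listDisksImages --project=my-project \
    --title="List Disks and Images" \
    --permissions=compute.disks.list,compute.images.list
gcloud projects add-iam-policy-binding my-project \
    --member=user:external@example.com \
    --role=projects/my-project/roles/listDisksImages
```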
212. You are running a web application on Cloud Run for a few hundred users. Some of your users complain that the initial web page of the application takes much longer to load than the following pages. You want to follow Google’s recommendations to mitigate the issue. What should you do?
- A. Set the minimum number of instances for your Cloud Run service to 3.
- B. Set the concurrency number to 1 for your Cloud Run service.
- C. Set the maximum number of instances for your Cloud Run service to 100.
- D. Update your web application to use the protocol HTTP/2 instead of HTTP/1.1.
Cloud Run scales to zero when there is no traffic, so the first request can hit a cold start; setting min-instances to 3 keeps at least 3 instances warm, improving the initial page load.
Setting concurrency to 1 forces one request per instance, which increases the number of instances needed; it does not address cold starts.
Raising the maximum instance count only prepares for traffic spikes.
Answer: A
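A sketch of option A (service name and region are placeholders):

```shell
# Keep 3 warm instances to avoid cold starts on the first page load
gcloud run services update my-web-app --region=us-central1 --min-instances=3
```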
213. You are building a data lake on Google Cloud for your Internet of Things (IoT) application. The IoT application has millions of sensors that are constantly streaming structured and unstructured data to your backend in the cloud. You want to build a highly available and resilient architecture based on Google-recommended practices. What should you do?
- A. Stream data to Pub/Sub, and use Dataflow to send data to Cloud Storage.
- B. Stream data to Pub/Sub, and use Storage Transfer Service to send data to BigQuery.
- C. Stream data to Dataflow, and use Dataprep by Trifacta to send data to Bigtable.
- D. Stream data to Dataflow, and use Storage Transfer Service to send data to BigQuery.
Google recommends Pub/Sub for ingesting IoT event streams, and Cloud Storage serves as the primary data lake store for both structured and unstructured data; Dataflow moves the stream from Pub/Sub into Cloud Storage.
Storage Transfer Service is for scheduled batch transfers.
Dataprep is a data-preparation tool.
BigQuery is a data warehouse for analytics, not a data lake.
Answer: A
214. You are running out of primary internal IP addresses in a subnet for a custom mode VPC. The subnet has the IP range 10.0.0.0/20, and the IP addresses are primarily used by virtual machines in the project. You need to provide more IP addresses for the virtual machines. What should you do?
- A. Add a secondary IP range 10.1.0.0/20 to the subnet.
- B. Change the subnet IP range from 10.0.0.0/20 to 10.0.0.0/18.
- C. Change the subnet IP range from 10.0.0.0/20 to 10.0.0.0/22.
- D. Convert the subnet IP range from IPv4 to IPv6.
Secondary IP ranges are used mainly for GKE (Google Kubernetes Engine) Pods and Services, not for primary VM addresses.
The current subnet is a /20 (4,096 addresses); expanding it to a /18 (16,384 addresses) provides more IP addresses. A subnet's primary range can only be widened, so shrinking to /22 (C) is not possible.
Answer: B
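The address math behind option B, with the in-place expansion command shown as a comment (subnet name and region are placeholders):

```shell
# gcloud compute networks subnets expand-ip-range my-subnet \
#     --region=us-central1 --prefix-length=18
# Address counts for each candidate prefix length:
for prefix in 20 18 22; do
  echo "/${prefix}: $((2 ** (32 - prefix))) addresses"
done
```

The loop prints 4096 addresses for /20, 16384 for /18, and 1024 for /22, which is why only /18 adds capacity.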
215. Your company requires all developers to have the same permissions, regardless of the Google Cloud project they are working on. Your company’s security policy also restricts developer permissions to Compute Engine, Cloud Functions, and Cloud SQL. You want to implement the security policy with minimal effort. What should you do?
- A. • Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions in one project within the Google Cloud organization.
• Copy the role across all projects created within the organization with the gcloud iam roles copy command.
• Assign the role to developers in those projects.
- B. • Add all developers to a Google group in Google Groups for Workspace.
• Assign the predefined role of Compute Admin to the Google group at the Google Cloud organization level.
- C. • Add all developers to a Google group in Cloud Identity.
• Assign predefined roles for Compute Engine, Cloud Functions, and Cloud SQL permissions to the Google group for each project in the Google Cloud organization.
- D. • Add all developers to a Google group in Cloud Identity.
• Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions at the Google Cloud organization level.
• Assign the custom role to the Google group.
Defining the role at the organization level applies consistent permissions across all projects, and a Google group in Cloud Identity makes adding and removing users easy.
A custom role limited to Compute Engine, Cloud Functions, and Cloud SQL permissions enforces the security policy without granting unnecessary access.
Answer: D
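A sketch of option D; the organization ID, group, and the three sample permissions are placeholders (a real role would enumerate all allowed Compute Engine, Cloud Functions, and Cloud SQL permissions):

```shell
# Org-level custom role bound once to a Cloud Identity group
gcloud iam roles create developerRole --organization=123456789012 \
    --title="Developer" \
    --permissions=compute.instances.list,cloudfunctions.functions.list,cloudsql.instances.list
gcloud organizations add-iam-policy-binding 123456789012 \
    --member=group:developers@example.com \
    --role=organizations/123456789012/roles/developerRole
```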
216. You are working for a hospital that stores its medical images in an on-premises data room. The hospital wants to use Cloud Storage for archival storage of these images. The hospital wants an automated process to upload any new medical images to Cloud Storage. You need to design and implement a solution. What should you do?
- A. Create a Pub/Sub topic, and enable a Cloud Storage trigger for the Pub/Sub topic. Create an application that sends all medical images to the Pub/Sub topic.
- B. Create a script that uses the gcloud storage command to synchronize the on-premises storage with Cloud Storage, Schedule the script as a cron job.
- C. Create a Pub/Sub topic, and create a Cloud Function connected to the topic that writes data to Cloud Storage. Create an application that sends all medical images to the Pub/Sub topic.
- D. In the Google Cloud console, go to Cloud Storage. Upload the relevant images to the appropriate bucket.
Writing a script that uses the gcloud storage command to synchronize the on-premises storage with Cloud Storage, and scheduling it as a cron job, is a simple and reliable automated solution.
Answer: B
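A sketch of the sync script in option B (paths, bucket, and schedule are placeholders):

```shell
# Nightly one-way sync of the on-prem image directory to Cloud Storage
# crontab entry: 0 2 * * * /opt/scripts/sync_images.sh
gcloud storage rsync /data/medical-images gs://my-archive-bucket/medical-images \
    --recursive
```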
217. Your company has an internal application for managing transactional orders. The application is used exclusively by employees in a single physical location. The application requires strong consistency, fast queries, and ACID guarantees for multi-table transactional updates. The first version of the application is implemented in PostgreSQL, and you want to deploy it to the cloud with minimal code changes. Which database is most appropriate for this application?
- A. Bigtable
- B. BigQuery
- C. Cloud SQL
- D. Firestore
Cloud SQL offers fully managed PostgreSQL, so the existing application can be migrated to the cloud with almost no code changes.
Answer: C
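A sketch of provisioning the managed instance in option C (name, version, region, and sizing are placeholders):

```shell
# Managed PostgreSQL instance for the transactional workload
gcloud sql instances create orders-db \
    --database-version=POSTGRES_15 --region=us-central1 \
    --tier=db-custom-2-8192
```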
218. Your company runs one batch process in an on-premises server that takes around 30 hours to complete. The task runs monthly, can be performed offline, and must be restarted if interrupted. You want to migrate this workload to the cloud while minimizing cost. What should you do?
- A. Create an Instance Template with Spot VMs On. Create a Managed Instance Group from the template and adjust Target CPU Utilization. Migrate the workload.
- B. Migrate the workload to a Compute Engine VM. Start and stop the instance as needed.
- C. Migrate the workload to a Google Kubernetes Engine cluster with Spot nodes.
- D. Migrate the workload to a Compute Engine Spot VM.
Spot VMs are cheap but can be preempted unexpectedly, and since the job must be restarted if interrupted, a preemption could repeatedly discard up to 30 hours of work.
A Managed Instance Group (MIG) is suited to scalable application deployments, not a single batch job.
Migrating the workload to a Compute Engine VM and starting/stopping the instance as needed gives control over when the batch runs: it is easy to schedule monthly, and stopping the instance when idle avoids compute charges.
Answer: B
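A sketch of the monthly start/run/stop cycle in option B (VM name and zone are placeholders):

```shell
# Start the VM, let the monthly batch run (~30 hours), then stop it
gcloud compute instances start batch-vm --zone=us-central1-a
# ... batch job executes ...
gcloud compute instances stop batch-vm --zone=us-central1-a
```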
219. You are planning to migrate the following on-premises data management solutions to Google Cloud:
• One MySQL cluster for your main database
• Apache Kafka for your event streaming platform
• One Cloud SQL for PostgreSQL database for your analytical and reporting needs
You want to implement Google-recommended solutions for the migration. You need to ensure that the new solutions provide global scalability and require minimal operational and infrastructure management. What should you do?
- A. Migrate from MySQL to Cloud SQL, from Kafka to Pub/Sub, and from Cloud SQL for PostgreSQL to BigQuery.
- B. Migrate from MySQL to Cloud Spanner, from Kafka to Pub/Sub, and from Cloud SQL for PostgreSQL to BigQuery.
- C. Migrate from MySQL to Cloud Spanner, from Kafka to Memorystore, and from Cloud SQL for PostgreSQL to Cloud SQL.
- D. Migrate from MySQL to Cloud SQL, from Kafka to Memorystore, and from Cloud SQL for PostgreSQL to Cloud SQL.
MySQL → Cloud Spanner: Spanner offers global scalability with minimal management overhead.
Kafka → Pub/Sub: Pub/Sub is a fully managed messaging service that can replace Kafka.
Cloud SQL for PostgreSQL → BigQuery: BigQuery is the data warehouse of choice for large-scale analytical queries.
Cloud SQL is a regional service and lacks global scalability.
Memorystore is a caching service (Redis, Memcached), not an event-streaming platform.
Answer: B
220. During a recent audit of your existing Google Cloud resources, you discovered several users with email addresses outside of your Google Workspace domain. You want to ensure that your resources are only shared with users whose email addresses match your domain. You need to remove any mismatched users, and you want to avoid having to audit your resources to identify mismatched users. What should you do?
- A. Create a Cloud Scheduler task to regularly scan your projects and delete mismatched users.
- B. Create a Cloud Scheduler task to regularly scan your resources and delete mismatched users.
- C. Set an organizational policy constraint to limit identities by domain to automatically remove mismatched users.
- D. Set an organizational policy constraint to limit identities by domain, and then retroactively remove the existing mismatched users
An organization policy can automatically block identities outside your domain from being granted access to resources.
However, the constraint does not remove users who were added before it was set, so after applying the policy you must retroactively remove the existing mismatched users.
Answer: D
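A sketch of the constraint in option D, using the domain-restriction constraint with a placeholder Workspace customer ID and organization ID:

```shell
# Allow only identities from your Google Workspace customer
gcloud resource-manager org-policies allow \
    constraints/iam.allowedPolicyMemberDomains C0123456x \
    --organization=123456789012
```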
221. Your application is running on Google Cloud in a managed instance group (MIG). You see errors in Cloud Logging for one VM that one of the processes is not responsive. You want to replace this VM in the MIG quickly. What should you do?
- A. Use the gcloud compute instances update command with a REFRESH action for the VM.
- B. Use the gcloud compute instance-groups managed recreate-instances command to recreate the VM.
- C. Select the MIG from the Compute Engine console and, in the menu, select Replace VMs.
- D. Update and apply the instance template of the MIG.
The gcloud compute instance-groups managed recreate-instances command deletes the problematic VM in the MIG and creates a new one from the same instance template.
Answer: B
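A sketch of option B (MIG name, VM name, and zone are placeholders):

```shell
# Recreate only the unhealthy VM from the MIG's template
gcloud compute instance-groups managed recreate-instances my-mig \
    --instances=my-vm-x7kq --zone=us-central1-a
```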
222. You want to permanently delete a Pub/Sub topic managed by Config Connector in your Google Cloud project. What should you do?
- A. Use kubectl to create the label deleted-by-cnrm and to change its value to true for the topic resource.
- B. Use kubectl to delete the topic resource.
- C. Use gcloud CLI to delete the topic.
- D. Use gcloud CLI to update the topic label managed-by-cnrm to false.
Config Connector manages resources through Kubernetes, so the resource should be removed with kubectl delete.
Deleting the topic with the gcloud CLI is not permanent: Config Connector will recreate any resource it still manages.
Answer: B
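A sketch of option B, assuming the Config Connector PubSubTopic resource kind (topic and namespace names are placeholders):

```shell
# Deleting the Config Connector resource also deletes the underlying topic
kubectl delete pubsubtopic my-topic --namespace=my-namespace
```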
223. Your company is using Google Workspace to manage employee accounts. Anticipated growth will increase the number of personnel from 100 employees to 1,000 employees within 2 years. Most employees will need access to your company’s Google Cloud account. The systems and processes will need to support 10x growth without performance degradation, unnecessary complexity, or security issues. What should you do?
- A. Migrate the users to Active Directory. Connect the Human Resources system to Active Directory. Turn on Google Cloud Directory Sync (GCDS) for Cloud Identity. Turn on Identity Federation from Cloud Identity to Active Directory.
- B. Organize the users in Cloud Identity into groups. Enforce multi-factor authentication in Cloud Identity.
- C. Turn on identity federation between Cloud Identity and Google Workspace. Enforce multi-factor authentication for domain wide delegation.
- D. Use a third-party identity provider service through federation. Synchronize the users from Google Workplace to the third-party provider in real time.
Google Cloud manages user accounts through Cloud Identity, and organizing users into groups simplifies access control at scale.
Identity federation is for integrating an external identity provider, which adds unnecessary complexity here.
Answer: B
224. You want to host your video encoding software on Compute Engine. Your user base is growing rapidly, and users need to be able to encode their videos at any time without interruption or CPU limitations. You must ensure that your encoding solution is highly available, and you want to follow Google-recommended practices to automate operations. What should you do?
- A. Deploy your solution on multiple standalone Compute Engine instances, and increase the number of existing instances when CPU utilization on Cloud Monitoring reaches a certain threshold.
- B. Deploy your solution on multiple standalone Compute Engine instances, and replace existing instances with high-CPU instances when CPU utilization on Cloud Monitoring reaches a certain threshold.
- C. Deploy your solution to an instance group, and increase the number of available instances whenever you see high CPU utilization in Cloud Monitoring.
- D. Deploy your solution to an instance group, and set the autoscaling based on CPU utilization.
An instance group manages multiple VMs automatically and improves availability; configuring autoscaling on CPU utilization then adds instances as load grows, maintaining performance without manual intervention.
Answer: D
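A sketch of the autoscaling setup in option D (MIG name, zone, and thresholds are placeholders):

```shell
# Autoscale the MIG on CPU utilization
gcloud compute instance-groups managed set-autoscaling encoder-mig \
    --zone=us-central1-a --min-num-replicas=2 --max-num-replicas=20 \
    --target-cpu-utilization=0.75
```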
225. Your managed instance group raised an alert stating that new instance creation has failed to create new instances. You need to solve the instance creation problem. What should you do?
- A. Create an instance template that contains valid syntax which will be used by the instance group. Delete any persistent disks with the same name as instance names.
- B. Create an instance template that contains valid syntax that will be used by the instance group. Verify that the instance name and persistent disk name values are not the same in the template.
- C. Verify that the instance template being used by the instance group contains valid syntax. Delete any persistent disks with the same name as instance names. Set the disks.autoDelete property to true in the instance template.
- D. Delete the current instance template and replace it with a new instance template. Verify that the instance name and persistent disk name values are not the same in the template. Set the disks.autoDelete property to true in the instance template.
B fails because an instance template cannot assign a different custom name to the persistent disk.
C fails because an existing instance template cannot be updated or modified.
D fails because an instance template cannot be deleted while a managed instance group references it.
Setting disks.autoDelete = true would make disks be deleted along with their instances, preventing future name collisions, so it is worth configuring for completeness.
Answer: A
226. You have created an application that is packaged into a Docker image. You want to deploy the Docker image as a workload on Google Kubernetes Engine. What should you do?
- A. Upload the image to Cloud Storage and create a Kubernetes Service referencing the image.
- B. Upload the image to Cloud Storage and create a Kubernetes Deployment referencing the image.
- C. Upload the image to Artifact Registry and create a Kubernetes Service referencing the image.
- D. Upload the image to Artifact Registry and create a Kubernetes Deployment referencing the image.
On GKE (Google Kubernetes Engine), the recommended flow is to store the container image in Artifact Registry and deploy it as a Deployment; a Deployment enables rolling updates, autoscaling, and similar management features, whereas a Service only exposes Pods and does not run a workload.
Answer: D
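A sketch of option D (repository path, project, and image names are placeholders):

```shell
# Push to Artifact Registry, then create a Deployment referencing the image
docker push us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1
kubectl create deployment my-app \
    --image=us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1
```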
227. You are using Looker Studio to visualize a table from your data warehouse that is built on top of BigQuery. Data is appended to the data warehouse during the day. At night, the daily summary is recalculated by overwriting the table. You just noticed that the charts in Looker Studio are broken, and you want to analyze the problem. What should you do?
- A. In Cloud Logging, create a filter for your Looker Studio report.
- B. Use the open source CLI tool, Snapshot Debugger, to find out why the data was not refreshed correctly.
- C. Review the Error Reporting page in the Google Cloud console to find any errors.
- D. Use the BigQuery interface to review the nightly job and look for any errors.
Reviewing the nightly job in the BigQuery interface shows whether the table overwrite completed successfully and surfaces any errors.
Answer: D
228. You have a batch workload that runs every night and uses a large number of virtual machines (VMs). It is fault-tolerant and can tolerate some of the VMs being terminated. The current cost of VMs is too high. What should you do?
- A. Run a test using simulated maintenance events. If the test is successful, use Spot N2 Standard VMs when running future jobs.
- B. Run a test using simulated maintenance events. If the test is successful, use N2 Standard VMs when running future jobs.
- C. Run a test using a managed instance group. If the test is successful, use N2 Standard VMs in the managed instance group when running future jobs.
- D. Run a test using N1 standard VMs instead of N2. If the test is successful, use N1 Standard VMs when running future jobs.
Spot VMs can be preempted, but this batch workload tolerates the termination of some VMs, so Spot VMs cut costs; the simulated maintenance test confirms the workload handles preemption.
Answer: A
229. You created several resources in multiple Google Cloud projects. All projects are linked to different billing accounts. To better estimate future charges, you want to have a single visual representation of all costs incurred. You want to include new cost data as soon as possible. What should you do?
- A. Fill all resources in the Pricing Calculator to get an estimate of the monthly cost.
- B. Use the Reports view in the Cloud Billing Console to view the desired cost information.
- C. Visit the Cost Table page to get a CSV export and visualize it using Looker Studio.
- D. Configure Billing Data Export to BigQuery and visualize the data in Looker Studio.
The Pricing Calculator only estimates future costs; it does not report costs already incurred.
Configuring Billing Data Export to BigQuery consolidates cost data from the projects across multiple billing accounts for analysis.
Looker Studio then visualizes the data, and new cost data flows in automatically.
Answer: D
230. Your company has a large quantity of unstructured data in different file formats. You want to perform ETL transformations on the data. You need to make the data accessible on Google Cloud so it can be processed by a Dataflow job. What should you do?
- A. Upload the data to BigQuery using the bq command line tool.
- B. Upload the data to Cloud Storage using the gcloud storage command.
- C. Upload the data into Cloud SQL using the import function in the Google Cloud console.
- D. Upload the data into Cloud Spanner using the import function in the Google Cloud console.
Cloud Storage is the best fit for storing unstructured data in diverse file formats, and a Dataflow job can read from it directly.
Answer: B
231. You have deployed an application on a single Compute Engine instance. The application writes logs to disk. Users start reporting errors with the application. You want to diagnose the problem. What should you do?
- A. Navigate to Cloud Logging and view the application logs.
- B. Configure a health check on the instance and set a “consecutive successes” Healthy threshold value of 1.
- C. Connect to the instance’s serial console and read the application logs.
- D. Install and configure the Ops agent and view the logs from Cloud Logging.
Without the Ops Agent installed, logs written to disk are not collected automatically.
The Ops Agent ships application and system logs from Compute Engine to Cloud Logging, where the application errors can be diagnosed easily.
Answer: D
232. You recently received a new Google Cloud project with an attached billing account where you will work. You need to create instances, set firewalls, and store data in Cloud Storage. You want to follow Google-recommended practices. What should you do?
- A. Use the gcloud CLI services enable cloudresourcemanager.googleapis.com command to enable all resources.
- B. Use the gcloud services enable compute.googleapis.com command to enable Compute Engine and the gcloud services enable storage-api.googleapis.com command to enable the Cloud Storage APIs.
- C. Open the Google Cloud console and enable all Google Cloud APIs from the API dashboard.
- D. Open the Google Cloud console and run gcloud init --project in a Cloud Shell.
Google Cloud services require their APIs to be enabled before use; the recommended practice is to enable only the APIs you need, here Compute Engine and Cloud Storage individually.
Answer: B
233. Your application development team has created Docker images for an application that will be deployed on Google Cloud. Your team does not want to manage the infrastructure associated with this application. You need to ensure that the application can scale automatically as it gains popularity. What should you do?
- A. Create an instance template with the container image, and deploy a Managed Instance Group with Autoscaling.
- B. Upload Docker images to Artifact Registry, and deploy the application on Google Kubernetes Engine using Standard mode.
- C. Upload Docker images to the Cloud Storage, and deploy the application on Google Kubernetes Engine using Standard mode.
- D. Upload Docker images to Artifact Registry, and deploy the application on Cloud Run.
Container images should be stored in Artifact Registry.
GKE requires managing a Kubernetes cluster, which carries operational overhead.
Cloud Run runs containers in a serverless environment with no infrastructure to manage and scales automatically with traffic, so the development team does not have to operate infrastructure.
Answer: D
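A sketch of option D (service name, image path, and region are placeholders):

```shell
# Serverless deployment; Cloud Run scales instances with traffic
gcloud run deploy my-app \
    --image=us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1 \
    --region=us-central1 --allow-unauthenticated
```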
234. You are migrating a business critical application from your local data center into Google Cloud. As part of your high-availability strategy, you want to ensure that any data used by the application will be immediately available if a zonal failure occurs. What should you do?
- A. Store the application data on a zonal persistent disk. Create a snapshot schedule for the disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a new VM in another zone.
- B. Store the application data on a zonal persistent disk. If an outage occurs, create an instance in another zone with this disk attached.
- C. Store the application data on a regional persistent disk. Create a snapshot schedule for the disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a new VM in another zone.
- D. Store the application data on a regional persistent disk. If an outage occurs, create an instance in another zone with this disk attached.
A regional persistent disk synchronously replicates data across two zones in the same region, so the data remains immediately available even if one zone fails.
Answer: D
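A sketch of option D, with placeholder names, sizes, and zones; after a zonal outage, the surviving replica is force-attached to a standby VM in the other zone:

```shell
# Create a regional persistent disk replicated across two zones.
gcloud compute disks create app-data-disk \
  --region=us-central1 \
  --replica-zones=us-central1-a,us-central1-b \
  --size=200GB --type=pd-ssd

# During a zonal outage, attach the disk to a VM in the healthy zone.
gcloud compute instances attach-disk standby-vm \
  --disk=app-data-disk --disk-scope=regional \
  --zone=us-central1-b --force-attach
```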
235. The DevOps group in your organization needs full control of Compute Engine resources in your development project. However, they should not have permission to create or update any other resources in the project. You want to follow Google’s recommendations for setting permissions for the DevOps group. What should you do?
- A. Grant the basic role roles/viewer and the predefined role roles/compute.admin to the DevOps group.
- B. Create an IAM policy and grant all compute.instanceAdmin.* permissions to the policy. Attach the policy to the DevOps group.
- C. Create a custom role at the folder level and grant all compute.instanceAdmin.* permissions to the role. Grant the custom role to the DevOps group.
- D. Grant the basic role roles/editor to the DevOps group.
roles/compute.admin grants full control over Compute Engine resources (instances, disks, networks, and so on).
roles/viewer lets the group read the project's other resources but not modify them.
roles/editor would allow changes to every resource in the project, violating the principle of least privilege.
Answer: A
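The two grants from option A might look like this (project ID and group address are placeholders):

```shell
# Full control of Compute Engine resources for the DevOps group.
gcloud projects add-iam-policy-binding dev-project \
  --member="group:devops@example.com" --role="roles/compute.admin"

# Read-only access to everything else in the project.
gcloud projects add-iam-policy-binding dev-project \
  --member="group:devops@example.com" --role="roles/viewer"
```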
236. Your team is running an on-premises ecommerce application. The application contains a complex set of microservices written in Python, and each microservice is running on Docker containers. Configurations are injected by using environment variables. You need to deploy your current application to a serverless Google Cloud cloud solution. What should you do?
- A. Use your existing CI/CD pipeline. Use the generated Docker images and deploy them to Cloud Run. Update the configurations and the required endpoints.
- B. Use your existing continuous integration and delivery (CI/CD) pipeline. Use the generated Docker images and deploy them to Cloud Function. Use the same configuration as on-premises.
- C. Use the existing codebase and deploy each service as a separate Cloud Function. Update the configurations and the required endpoints.
- D. Use your existing codebase and deploy each service as a separate Cloud Run. Use the same configurations as on-premises.
Cloud Run is a serverless container runtime: no infrastructure to manage and automatic scaling. The existing Docker images can be reused as-is, and environment-variable-based configuration carries over easily.
Cloud Functions is generally suited to small, single-purpose functions, not a set of containerized microservices.
Answer: A
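Injecting the on-premises environment-variable configuration at deploy time, sketched with placeholder service, image, and values:

```shell
# Deploy one microservice to Cloud Run, passing configuration as
# environment variables just as on-premises.
gcloud run deploy orders-service \
  --image=us-central1-docker.pkg.dev/my-project/app-images/orders:v1 \
  --region=us-central1 \
  --set-env-vars=DB_HOST=10.0.0.5,FEATURE_FLAGS=beta
```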
237. You are running multiple microservices in a Kubernetes Engine cluster. One microservice is rendering images. The microservice responsible for the image rendering requires a large amount of CPU time compared to the memory it requires. The other microservices are workloads that are optimized for n2-standard machine types. You need to optimize your cluster so that all workloads are using resources as efficiently as possible. What should you do?
- A. Assign the pods of the image rendering microservice a higher pod priority than the other microservices.
- B. Create a node pool with compute-optimized machine type nodes for the image rendering microservice. Use the node pool with general-purpose machine type nodes for the other microservices.
- C. Use the node pool with general-purpose machine type nodes for the image rendering microservice. Create a node pool with compute-optimized machine type nodes for the other microservices.
- D. Configure the required amount of CPU and memory in the resource requests specification of the image rendering microservice deployment. Keep the resource requests for the other microservices at the default.
Image rendering is CPU-intensive, so compute-optimized machines (for example, c2-standard) are the best fit for it.
The other microservices should stay on general-purpose n2-standard nodes.
Answer: B
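Adding the dedicated node pool might look like this (cluster name, zone, machine type, and node count are placeholders):

```shell
# Compute-optimized node pool for the CPU-heavy rendering service.
gcloud container node-pools create render-pool \
  --cluster=my-cluster --zone=us-central1-a \
  --machine-type=c2-standard-8 --num-nodes=2
```

The rendering pods can then be pinned to this pool with a `nodeSelector` on the `cloud.google.com/gke-nodepool: render-pool` label in their Deployment spec.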
238. You are working in a team that has developed a new application that needs to be deployed on Kubernetes. The production application is business critical and should be optimized for reliability. You need to provision a Kubernetes cluster and want to follow Google-recommended practices. What should you do?
- A. Create a GKE Autopilot cluster. Enroll the cluster in the rapid release channel.
- B. Create a GKE Autopilot cluster. Enroll the cluster in the stable release channel.
- C. Create a zonal GKE standard cluster. Enroll the cluster in the stable release channel.
- D. Create a regional GKE standard cluster. Enroll the cluster in the rapid release channel.
GKE Autopilot manages nodes automatically, reducing operational burden and improving reliability; Autopilot clusters are also regional, so they survive a zonal failure.
The stable release channel runs validated, stable GKE versions, which is appropriate for a business-critical workload.
A zonal cluster has lower availability: a single zone outage can take the service down.
Answer: B
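A sketch of option B (cluster name and region are placeholders):

```shell
# Regional Autopilot cluster enrolled in the stable release channel.
gcloud container clusters create-auto prod-cluster \
  --region=us-central1 \
  --release-channel=stable
```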
239. You are responsible for a web application on Compute Engine. You want your support team to be notified automatically if users experience high latency for at least 5 minutes. You need a Google-recommended solution with no development cost. What should you do?
- A. Export Cloud Monitoring metrics to BigQuery and use a Looker Studio dashboard to monitor your web application’s latency.
- B. Create an alert policy to send a notification when the HTTP response latency exceeds the specified threshold.
- C. Implement an App Engine service which invokes the Cloud Monitoring API and sends a notification in case of anomalies.
- D. Use the Cloud Monitoring dashboard to observe latency and take the necessary actions when the response latency exceeds the specified threshold.
A Cloud Monitoring alert policy can automatically notify the support team when HTTP response latency exceeds a threshold for a specified duration (here, 5 minutes), with no code to write.
Answer: B
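One way to create such a policy from the CLI is a policy file; the metric filter, threshold, and aggregation below are illustrative only (the exact metric type depends on how the application is fronted), and the command is in gcloud's alpha track at the time of writing:

```shell
cat > policy.json <<'EOF'
{
  "displayName": "High latency for 5 minutes",
  "combiner": "OR",
  "conditions": [{
    "displayName": "HTTP latency above threshold",
    "conditionThreshold": {
      "filter": "metric.type=\"loadbalancing.googleapis.com/https/total_latencies\" resource.type=\"https_lb_rule\"",
      "comparison": "COMPARISON_GT",
      "thresholdValue": 500,
      "duration": "300s",
      "aggregations": [{"alignmentPeriod": "60s",
                        "perSeriesAligner": "ALIGN_PERCENTILE_95"}]
    }
  }]
}
EOF
gcloud alpha monitoring policies create --policy-from-file=policy.json
```

A notification channel (email, PagerDuty, and so on) is attached to the policy so the support team is alerted automatically.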
240. You have an on-premises data analytics set of binaries that processes data files in memory for about 45 minutes every midnight. The sizes of those data files range from 1 gigabyte to 16 gigabytes. You want to migrate this application to Google Cloud with minimal effort and cost. What should you do?
- A. Create a container for the set of binaries. Use Cloud Scheduler to start a Cloud Run job for the container.
- B. Create a container for the set of binaries. Deploy the container to Google Kubernetes Engine (GKE) and use the Kubernetes scheduler to start the application.
- C. Upload the code to Cloud Functions. Use Cloud Scheduler to start the application.
- D. Lift and shift to a VM on Compute Engine. Use an instance schedule to start and stop the instance.
A Cloud Run job could cover the 45-minute runtime, but containerizing the binaries is extra migration work, and holding data files of up to 16 GB in memory pushes against Cloud Run's resource limits.
To migrate with minimal effort and cost, without changing the existing binaries, lift and shift to a Compute Engine VM and use an instance schedule to start it before midnight and stop it when the job finishes.
Answer: D
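The instance schedule from option D, sketched with placeholder names, times, and region (the cron expressions are interpreted in the given timezone):

```shell
# Start the VM shortly before midnight, stop it after the ~45-minute window.
gcloud compute resource-policies create instance-schedule nightly-etl \
  --region=us-central1 \
  --vm-start-schedule="50 23 * * *" \
  --vm-stop-schedule="0 1 * * *" \
  --timezone=UTC

# Attach the schedule to the lifted-and-shifted VM.
gcloud compute instances add-resource-policies etl-vm \
  --zone=us-central1-a --resource-policies=nightly-etl
```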
241. You used the gcloud container clusters command to create two Google Cloud Kubernetes (GKE) clusters: prod-cluster and dev-cluster.
• prod-cluster is a standard cluster.
• dev-cluster is an auto-pilot cluster.
When you run the kubectl get nodes command, you only see the nodes from prod-cluster. Which commands should you run to check the node status for dev-cluster?
- A. gcloud container clusters get-credentials dev-cluster
kubectl get nodes - B. gcloud container clusters update -generate-password dev-cluster kubectl get nodes
- C. kubectl config set-context dev-cluster
kubectl cluster-info - D. kubectl config set-credentials dev-cluster
kubectl cluster-info
kubectl get nodes only lists nodes for the cluster in the current kubeconfig context, so you must first fetch credentials for the Autopilot cluster (dev-cluster) with gcloud container clusters get-credentials.
Answer: A
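In full, assuming dev-cluster lives in a placeholder region (Autopilot clusters are regional):

```shell
# Fetch dev-cluster's credentials and merge them into kubeconfig;
# this also switches the current kubectl context to dev-cluster.
gcloud container clusters get-credentials dev-cluster --region=us-central1

kubectl config get-contexts   # confirm the active context
kubectl get nodes             # now lists dev-cluster's nodes
```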
242. You recently discovered that your developers are using many service account keys during their development process. While you work on a long term improvement, you need to quickly implement a process to enforce short-lived service account credentials in your company. You have the following requirements:
• All service accounts that require a key should be created in a centralized project called pj-sa.
• Service account keys should only be valid for one day.
You need a Google-recommended solution that minimizes cost. What should you do?
- A. Implement a Cloud Run job to rotate all service account keys periodically in pj-sa. Enforce an org policy to deny service account key creation with an exception to pj-sa.
- B. Implement a Kubernetes CronJob to rotate all service account keys periodically. Disable attachment of service accounts to resources in all projects with an exception to pj-sa.
- C. Enforce an org policy constraint allowing the lifetime of service account keys to be 24 hours. Enforce an org policy constraint denying service account key creation with an exception on pj-sa.
- D. Enforce a DENY org policy constraint over the lifetime of service account keys for 24 hours. Disable attachment of service accounts to resources in all projects with an exception to pj-sa.
An organization policy constraint can cap the maximum lifetime of service account keys at 24 hours (constraints/iam.serviceAccountKeyExpiryHours).
A second constraint denies service account key creation everywhere, with an exception allowing key creation in the pj-sa project.
A DENY policy cannot express a 24-hour expiry; it would simply block key creation outright.
Answer: C
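A sketch of the two constraints, assuming a placeholder organization ID and current gcloud policy syntax (verify the exact constraint names and allowed values against the org-policy documentation):

```shell
# Cap service account key lifetime at 24 hours, organization-wide.
cat > key-expiry.yaml <<'EOF'
name: organizations/123456789/policies/iam.serviceAccountKeyExpiryHours
spec:
  rules:
  - values:
      allowedValues:
      - "24h"
EOF
gcloud org-policies set-policy key-expiry.yaml

# Deny key creation org-wide, then carve out an exception for pj-sa.
gcloud resource-manager org-policies enable-enforce \
  iam.disableServiceAccountKeyCreation --organization=123456789
gcloud resource-manager org-policies disable-enforce \
  iam.disableServiceAccountKeyCreation --project=pj-sa
```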
243. Your company is running a three-tier web application on virtual machines that use a MySQL database. You need to create an estimated total cost of cloud infrastructure to run this application on Google Cloud instances and Cloud SQL. What should you do?
- A. Create a Google spreadsheet with multiple Google Cloud resource combinations. On a separate sheet, import the current Google Cloud prices and use these prices for the calculations within formulas.
- B. Use the Google Cloud Pricing Calculator and select the Cloud Operations template to define your web application with as much detail as possible.
- C. Implement a similar architecture on Google Cloud, and run a reasonable load test on a smaller scale. Check the billing information, and calculate the estimated costs based on the real load your system usually handles.
- D. Use the Google Cloud Pricing Calculator to determine the cost of every Google Cloud resource you expect to use. Use similar size instances for the web server, and use your current on-premises machines as a comparison for Cloud SQL.
The Cloud Operations template (monitoring and logging) in option B is not suited to estimating a web application's infrastructure cost.
The Google Cloud Pricing Calculator is the official tool for estimating costs from expected resource usage.
Size the web server instances similarly to the current machines, and use the on-premises MySQL servers as the reference for sizing Cloud SQL.
Answer: D
244. You have a Bigtable instance that consists of three nodes that store personally identifiable information (PII) data. You need to log all read or write operations, including any metadata or configuration reads of this database table, in your company’s Security Information and Event Management (SIEM) system. What should you do?
- A. • Navigate to Cloud Monitoring in the Google Cloud console, and create a custom monitoring job for the Bigtable instance to track all changes.
  • Create an alert by using webhook endpoints, with the SIEM endpoint as a receiver.
- B. • Navigate to the Audit Logs page in the Google Cloud console, and enable Admin Write logs for the Bigtable instance.
  • Create a Cloud Functions instance to export logs from Cloud Logging to your SIEM.
- C. • Navigate to the Audit Logs page in the Google Cloud console, and enable Data Read, Data Write and Admin Read logs for the Bigtable instance.
  • Create a Pub/Sub topic as a Cloud Logging sink destination, and add your SIEM as a subscriber to the topic.
- D. • Install the Ops Agent on the Bigtable instance during configuration.
  • Create a service account with read permissions for the Bigtable instance.
  • Create a custom Dataflow job with this service account to export logs to the company’s SIEM system.
Enable Data Read, Data Write, and Admin Read audit logs for Bigtable so that every data access and configuration read is recorded.
A Cloud Logging sink can then route those logs to a Pub/Sub topic, which the SIEM subscribes to for analysis.
Bigtable is a managed service; you cannot install the Ops Agent on its nodes.
Answer: C
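After enabling the audit log types in the console (IAM & Admin → Audit Logs), the sink setup might look like this; project and topic names are placeholders, and the log filter is indicative (confirm the exact service name and resource type for Bigtable audit logs in the documentation):

```shell
gcloud pubsub topics create siem-audit-logs

# Route Bigtable audit log entries to the Pub/Sub topic.
gcloud logging sinks create bigtable-audit-sink \
  pubsub.googleapis.com/projects/my-project/topics/siem-audit-logs \
  --log-filter='protoPayload.serviceName="bigtable.googleapis.com"'
```

The sink's writer identity must then be granted `roles/pubsub.publisher` on the topic, and the SIEM is added as a subscriber.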
245. You want to set up a Google Kubernetes Engine cluster. Verifiable node identity and integrity are required for the cluster, and nodes cannot be accessed from the internet. You want to reduce the operational cost of managing your cluster, and you want to follow Google-recommended practices. What should you do?
- A. Deploy a private autopilot cluster.
- B. Deploy a public autopilot cluster.
- C. Deploy a standard public cluster and enable shielded nodes.
- D. Deploy a standard private cluster and enable shielded nodes.
An Autopilot cluster manages nodes automatically, reducing operational cost, and uses Shielded GKE Nodes by default, which provides verifiable node identity and integrity.
Making the cluster private gives nodes internal IP addresses only, so they cannot be reached from the internet.
Answer: A
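A sketch of option A with placeholder name, region, and control-plane CIDR:

```shell
# Private Autopilot cluster: nodes get internal IPs only and are not
# reachable from the internet; Shielded Nodes are on by default.
gcloud container clusters create-auto secure-cluster \
  --region=us-central1 \
  --enable-private-nodes \
  --master-ipv4-cidr=172.16.0.0/28
```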
246. Your company wants to migrate their on-premises workloads to Google Cloud. The current on-premises workloads consist of:
• A Flask web application
• A backend API
• A scheduled long-running background job for ETL and reporting
You need to keep operational costs low. You want to follow Google-recommended practices to migrate these workloads to serverless solutions on Google Cloud. What should you do?
- A. Migrate the web application to App Engine and the backend API to Cloud Run. Use Cloud Tasks to run your background job on Compute Engine.
- B. Migrate the web application to App Engine and the backend API to Cloud Run. Use Cloud Tasks to run your background job on Cloud Run.
- C. Run the web application on a Cloud Storage bucket and the backend API on Cloud Run. Use Cloud Tasks to run your background job on Cloud Run.
- D. Run the web application on a Cloud Storage bucket and the backend API on Cloud Run. Use Cloud Tasks to run your background job on Compute Engine.
App Engine runs the Flask web application in a serverless environment with minimal changes.
Cloud Run deploys the backend API as a container and scales automatically.
Cloud Tasks triggering the background job on Cloud Run is the most cost-effective serverless option for the ETL and reporting work.
Compute Engine (options A and D) is not serverless: it costs more and carries a heavier management burden than the serverless alternatives.
Answer: B
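Wiring Cloud Tasks to the Cloud Run background job might look like this; the queue name, target URL, and invoker service account are all placeholders:

```shell
gcloud tasks queues create etl-queue --location=us-central1

# Enqueue an authenticated HTTP task that triggers the ETL job on Cloud Run.
gcloud tasks create-http-task \
  --queue=etl-queue --location=us-central1 \
  --url="https://etl-job-abc123-uc.a.run.app/run" \
  --oidc-service-account-email=tasks-invoker@my-project.iam.gserviceaccount.com
```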
247. Your company is moving its continuous integration and delivery (CI/CD) pipeline to Compute Engine instances. The pipeline will manage the entire cloud infrastructure through code. How can you ensure that the pipeline has appropriate permissions while your system is following security best practices?
- A. • Attach a single service account to the compute instances.
  • Add minimal rights to the service account.
  • Allow the service account to impersonate a Cloud Identity user with elevated permissions to create, update, or delete resources.
- B. • Add a step for human approval to the CI/CD pipeline before the execution of the infrastructure provisioning.
  • Use the human approvals IAM account for the provisioning.
- C. • Attach a single service account to the compute instances.
  • Add all required Identity and Access Management (IAM) permissions to this service account to create, update, or delete resources.
- D. • Create multiple service accounts, one for each pipeline with the appropriate minimal Identity and Access Management (IAM) permissions.
  • Use a secret manager service to store the key files of the service accounts.
  • Allow the CI/CD pipeline to request the appropriate secrets during the execution of the pipeline.
Letting the pipeline impersonate a Cloud Identity user with elevated permissions is a security risk.
A human-approval step defeats the purpose of an automated CI/CD pipeline.
Per-pipeline service accounts with minimal IAM permissions follow least privilege, and Secret Manager stores the key files securely so the pipeline retrieves them only when needed.
Answer: D
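A sketch of the Secret Manager side of option D, with placeholder secret, file, and account names:

```shell
# Store one pipeline's service account key as a secret.
gcloud secrets create deploy-pipeline-sa-key --replication-policy=automatic
gcloud secrets versions add deploy-pipeline-sa-key --data-file=sa-key.json

# Only that pipeline's identity may read the secret.
gcloud secrets add-iam-policy-binding deploy-pipeline-sa-key \
  --member="serviceAccount:pipeline@my-project.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

# Inside the pipeline, fetch the key at runtime.
gcloud secrets versions access latest --secret=deploy-pipeline-sa-key
```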
248. Your application stores files on Cloud Storage by using the Standard Storage class. The application only requires access to files created in the last 30 days. You want to automatically save costs on files that are no longer accessed by the application. What should you do?
- A. Create an object lifecycle on the storage bucket to change the storage class to Archive Storage for objects with an age over 30 days.
- B. Create a cron job in Cloud Scheduler to call a Cloud Functions instance every day to delete files older than 30 days.
- C. Create a retention policy on the storage bucket of 30 days, and lock the bucket by using a retention policy lock.
- D. Enable object versioning on the storage bucket and add lifecycle rules to expire non-current versions after 30 days.
An Object Lifecycle Management rule that moves objects older than 30 days to Archive storage cuts storage costs automatically, while the files remain retrievable if ever needed.
Answer: A
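The lifecycle rule from option A, sketched with a placeholder bucket name:

```shell
cat > lifecycle.json <<'EOF'
{
  "rule": [{
    "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
    "condition": {"age": 30}
  }]
}
EOF
gcloud storage buckets update gs://my-bucket --lifecycle-file=lifecycle.json
```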
249. Your manager asks you to deploy a workload to a Kubernetes cluster. You are not sure of the workload's resource requirements or how the requirements might vary depending on usage patterns, external dependencies, or other factors. You need a solution that makes cost-effective recommendations regarding CPU and memory requirements, and allows the workload to function consistently in any situation. You want to follow Google-recommended practices. What should you do?
- A. Configure the Horizontal Pod Autoscaler for availability, and configure the cluster autoscaler for suggestions.
- B. Configure the Horizontal Pod Autoscaler for availability, and configure the Vertical Pod Autoscaler recommendations for suggestions.
- C. Configure the Vertical Pod Autoscaler recommendations for availability, and configure the Cluster autoscaler for suggestions.
- D. Configure the Vertical Pod Autoscaler recommendations for availability, and configure the Horizontal Pod Autoscaler for suggestions.
The cluster autoscaler adjusts the number of nodes, not pod resources.
The Horizontal Pod Autoscaler (HPA) keeps the workload available by scaling the number of pods with load.
Vertical Pod Autoscaler (VPA) recommendations monitor actual CPU and memory usage and suggest appropriate resource requests.
Answer: B
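A VPA in recommendation-only mode observes the workload and reports suggested requests without evicting pods, leaving HPA in charge of scale-out. A sketch, assuming a placeholder Deployment named my-workload:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-workload-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-workload
  updatePolicy:
    updateMode: "Off"   # recommendations only, no automatic updates
EOF

# Read the CPU/memory recommendations later:
kubectl describe vpa my-workload-vpa
```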
250. You need to migrate invoice documents stored on-premises to Cloud Storage. The documents have the following storage requirements:
• Documents must be kept for five years.
• Up to five revisions of the same invoice document must be stored, to allow for corrections.
• Documents older than 365 days should be moved to lower cost storage tiers.
You want to follow Google-recommended practices to minimize your operational and development costs. What should you do?
- A. Enable retention policies on the bucket, and use Cloud Scheduler to invoke a Cloud Function to move or delete your documents based on their metadata.
- B. Enable retention policies on the bucket, use lifecycle rules to change the storage classes of the objects, set the number of versions, and delete old files.
- C. Enable object versioning on the bucket, and use Cloud Scheduler to invoke a Cloud Functions instance to move or delete your documents based on their metadata.
- D. Enable object versioning on the bucket, use lifecycle conditions to change the storage class of the objects, set the number of versions, and delete old files.
A retention policy forcibly prevents deletion; it does not manage versions or reduce cost, and without object versioning enabled, lifecycle rules cannot control how many versions to keep.
With Object Versioning enabled, a lifecycle rule can cap the bucket at five noncurrent versions per invoice.
Lifecycle rules can also move objects older than 365 days to a lower-cost storage class (Coldline/Archive) and delete them after five years.
Answer: D
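A sketch of option D with a placeholder bucket name; the storage class and five-year age are illustrative and should be adjusted to the actual policy:

```shell
# Keep up to 5 revisions of each invoice via object versioning.
gcloud storage buckets update gs://invoices --versioning

cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "Delete"},
     "condition": {"numNewerVersions": 5}},
    {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
     "condition": {"age": 365}},
    {"action": {"type": "Delete"},
     "condition": {"age": 1825}}
  ]
}
EOF
gcloud storage buckets update gs://invoices --lifecycle-file=lifecycle.json
```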