
GCP Associate Dump Questions 251 ~ 283

sidedoor 2025. 2. 21. 10:34

https://www.examtopics.com/exams/google/associate-cloud-engineer/view/

 


This post works through the GCP Associate dump questions from the link above.

 

251. You installed the Google Cloud CLI on your workstation and set the proxy configuration. However, you are worried that your proxy credentials will be recorded in the gcloud CLI logs. You want to prevent your proxy credentials from being logged. What should you do?

  • A. Configure username and password by using gcloud config set proxy/username and gcloud config set proxy/password commands.
  • B. Encode the username and password in sha256 encoding, and save it to a text file. Use the filename as a value in the gcloud config set core/custom_ca_certs_file command.
  • C. Provide values for CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD in the gcloud CLI tool configuration file.
  • D. Set the CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD properties by using environment variables in your command line tool.

Setting the CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD environment variables keeps the proxy credentials in the shell session, so they are not written to the gcloud CLI configuration or its logs.

Answer: D
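
A minimal sketch of option D; the proxy host, port, username, and password below are placeholders:

```
# Non-sensitive proxy settings can be stored as gcloud properties.
gcloud config set proxy/type http
gcloud config set proxy/address proxy.example.com
gcloud config set proxy/port 3128

# Credentials are supplied only as environment variables for this session,
# so they never land in the gcloud properties file or logs.
export CLOUDSDK_PROXY_USERNAME=jdoe
export CLOUDSDK_PROXY_PASSWORD='example-password'

gcloud compute instances list
```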

252. Your company developed an application to deploy on Google Kubernetes Engine. Certain parts of the application are not fault-tolerant and are allowed to have downtime. Other parts of the application are critical and must always be available. You need to configure a Google Kubernetes Engine cluster while optimizing for cost. What should you do?

  • A. Create a cluster with a single node-pool by using standard VMs. Label the fault-tolerant Deployments as spot_true.
  • B. Create a cluster with a single node-pool by using Spot VMs. Label the critical Deployments as spot_false.
  • C. Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical deployments on the Spot VM node pool and the fault-tolerant deployments on the node pool by using standard VMs.
  • D. Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical deployments on the node pool by using standard VMs and the fault-tolerant deployments on the Spot VM node pool.

To optimize cost, run the fault-tolerant workloads on a Spot VM node pool to reduce spend, and run the critical workloads that must always be available on a standard VM node pool.

Answer: D
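
A sketch of the two-node-pool setup, assuming a zonal cluster named my-cluster in us-central1-a (all names are placeholders):

```
# Default node pool on standard VMs for the critical workloads.
gcloud container clusters create my-cluster --zone us-central1-a --num-nodes 2

# Additional node pool on Spot VMs for the fault-tolerant workloads.
gcloud container node-pools create spot-pool \
  --cluster my-cluster --zone us-central1-a \
  --spot --num-nodes 2

# Fault-tolerant Deployments can then target the Spot nodes with a nodeSelector
# on the cloud.google.com/gke-spot=true label that GKE applies to Spot nodes.
```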

253. You need to deploy an application in Google Cloud using serverless technology. You want to test a new version of the application with a small percentage of production traffic. What should you do?

  • A. Deploy the application to Cloud Run. Use gradual rollouts for traffic splitting.
  • B. Deploy the application to Google Kubernetes Engine. Use Anthos Service Mesh for traffic splitting.
  • C. Deploy the application to Cloud Functions. Specify the version number in the function's name.
  • D. Deploy the application to App Engine. For each new version, create a new service.

Cloud Run traffic splitting lets you route a chosen percentage of production traffic to the new revision.

A gradual rollout lets you verify the new version's stability while only a small share of users is exposed to it.

Answer: A
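
A minimal canary sketch on Cloud Run; the service name, image, region, and tag are placeholders:

```
# Deploy the new revision without routing any traffic to it yet.
gcloud run deploy my-service \
  --image gcr.io/my-project/app:v2 \
  --region us-central1 \
  --no-traffic --tag canary

# Send 10% of production traffic to the tagged revision.
gcloud run services update-traffic my-service \
  --region us-central1 --to-tags canary=10
```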

254. Your company's security vulnerability management policy wants a member of the security team to have visibility into vulnerabilities and other OS metadata for a specific Compute Engine instance. This Compute Engine instance hosts a critical application in your Google Cloud project. You need to implement your company's security vulnerability management policy. What should you do?

  • A. • Ensure that the Ops Agent is installed on the Compute Engine instance.
    • Create a custom metric in the Cloud Monitoring dashboard.
    • Provide the security team member with access to this dashboard.
  • B. • Ensure that the Ops Agent is installed on the Compute Engine instance.
    • Provide the security team member roles/osconfig.inventoryViewer permission.
  • C. • Ensure that the OS Config agent is installed on the Compute Engine instance.
    • Provide the security team member roles/osconfig.vulnerabilityReportViewer permission.
  • D. • Ensure that the OS Config agent is installed on the Compute Engine instance.
    • Create a log sink to BigQuery dataset.
    • Provide the security team member with access to this dataset.

The Ops Agent is used for performance monitoring, and roles/osconfig.inventoryViewer only exposes system inventory (installed packages, OS version), not vulnerabilities.

The OS Config agent manages OS updates and security vulnerability information for Compute Engine. To let the security team view vulnerability data, grant the roles/osconfig.vulnerabilityReportViewer role.

Answer: C
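
A sketch of option C, assuming VM Manager (OS Config) is enabled in the project; the instance, zone, project, and user below are placeholders:

```
# Enable OS Config metadata on the instance so the agent reports vulnerability data.
gcloud compute instances add-metadata critical-app-vm \
  --zone us-central1-a --metadata enable-osconfig=TRUE

# Give the security team member read access to vulnerability reports.
gcloud projects add-iam-policy-binding my-project \
  --member user:security-analyst@example.com \
  --role roles/osconfig.vulnerabilityReportViewer

# The security team member can then list reports for the zone.
gcloud compute os-config vulnerability-reports list --location us-central1-a
```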

255. You want to enable your development team to deploy new features to an existing Cloud Run service in production. To minimize the risk associated with a new revision, you want to reduce the number of customers who might be affected by an outage without introducing any development or operational costs to your customers. You want to follow Google-recommended practices for managing revisions to a service. What should you do?

  • A. Ask your customers to retry access to your service with exponential backoff to mitigate any potential problems after the new revision is deployed.
  • B. Gradually roll out the new revision and split customer traffic between the revisions to allow rollback in case a problem occurs.
  • C. Send all customer traffic to the new revision, and roll back to a previous revision if you witness any problems in production.
  • D. Deploy your application to a second Cloud Run service, and ask your customers to use the second Cloud Run service.

Cloud Run traffic splitting lets you roll out a new revision gradually, so only a small share of customers is affected if something goes wrong, and traffic can be shifted back to the previous revision immediately.

Answer: B
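
A short sketch of the rollout and rollback commands; the service and revision names are placeholders:

```
# Send 5% of traffic to the newest revision, keep 95% on the current one.
gcloud run services update-traffic my-service \
  --region us-central1 --to-revisions LATEST=5

# If problems appear, shift everything back to the known-good revision.
gcloud run services update-traffic my-service \
  --region us-central1 --to-revisions my-service-00042-abc=100
```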

256. You have deployed an application on a Compute Engine instance. An external consultant needs to access the Linux-based instance. The consultant is connected to your corporate network through a VPN connection, but the consultant has no Google account. What should you do?

  • A. Instruct the external consultant to use the gcloud compute ssh command line tool by using Identity-Aware Proxy to access the instance.
  • B. Instruct the external consultant to use the gcloud compute ssh command line tool by using the public IP address of the instance to access it.
  • C. Instruct the external consultant to generate an SSH key pair, and request the public key from the consultant. Add the public key to the instance yourself, and have the consultant access the instance through SSH with their private key.
  • D. Instruct the external consultant to generate an SSH key pair, and request the private key from the consultant. Add the private key to the instance yourself, and have the consultant access the instance through SSH with their public key.

IAP is not an option here because it requires a Google account.

Have the consultant generate an SSH key pair and send you only the public key. You add the public key to the instance, and the consultant connects securely over SSH with their private key, which never leaves their machine.

Answer: C
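
A sketch of the key exchange; the key file names, username, instance name, and internal IP are placeholders:

```
# On the consultant's machine: generate a key pair and share only the public key.
ssh-keygen -t ed25519 -f ~/.ssh/gcp_consultant -C consultant

# On the admin's side: add the public key to the instance metadata.
# The value uses the "USERNAME:KEY" format and should include any existing keys,
# because this replaces the instance's ssh-keys metadata value.
echo "consultant:$(cat gcp_consultant.pub)" > ssh-keys.txt
gcloud compute instances add-metadata app-vm \
  --zone us-central1-a --metadata-from-file ssh-keys=ssh-keys.txt

# The consultant then connects over the VPN using the instance's internal IP.
ssh -i ~/.ssh/gcp_consultant consultant@10.128.0.5
```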

257. After a recent security incident, your startup company wants better insight into what is happening in the Google Cloud environment. You need to monitor unexpected firewall changes and instance creation. Your company prefers simple solutions. What should you do?

  • A. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Cloud Storage. Use BigQuery to periodically analyze log events in the storage bucket.
  • B. Use Cloud Logging filters to create log-based metrics for firewall and instance actions. Monitor the changes and set up reasonable alerts.
  • C. Install Kibana on a compute instance. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Pub/Sub. Target the Pub/Sub topic to push messages to the Kibana instance. Analyze the logs on Kibana in real time.
  • D. Turn on Google Cloud firewall rules logging, and set up alerts for any insert, update, or delete events.

Cloud Logging filters can detect firewall changes and instance-creation events. Creating log-based metrics from those filters and attaching alerting policies gives simple, near-real-time monitoring.

Answer: B
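
A sketch of option B, creating log-based metrics for the two event types; the metric names and filters are illustrative and may need tuning:

```
# Metric for firewall rule changes recorded in Cloud Audit Logs.
gcloud logging metrics create firewall-changes \
  --description "Firewall rule insert/patch/delete events" \
  --log-filter 'resource.type="gce_firewall_rule" AND protoPayload.methodName:"compute.firewalls"'

# Metric for new Compute Engine instances.
gcloud logging metrics create instance-creations \
  --description "Compute Engine instance creation events" \
  --log-filter 'resource.type="gce_instance" AND protoPayload.methodName:"instances.insert"'

# Alerting policies on these metrics are then configured in Cloud Monitoring.
```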



258. You are configuring service accounts for an application that spans multiple projects. Virtual machines (VMs) running in the web-applications project need access to BigQuery datasets in the crm-databases project. You want to follow Google-recommended practices to grant access to the service account in the web-applications project. What should you do?

  • A. Grant "project owner" for web-applications appropriate roles to crm-databases.
  • B. Grant "project owner" role to crm-databases and the web-applications project.
  • C. Grant "project owner" role to crm-databases and roles/bigquery.dataViewer role to web-applications.
  • D. Grant roles/bigquery.dataViewer role to crm-databases and appropriate roles to web-applications.

Reading a BigQuery dataset requires the roles/bigquery.dataViewer role. Granting that role on crm-databases to the service account used by the web-applications project gives it access to the BigQuery data.

Answer: D
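
A sketch of the grant, assuming the VMs run as a service account named web-app-sa in the web-applications project (names are placeholders):

```
# Grant read access on crm-databases to the web-applications service account.
gcloud projects add-iam-policy-binding crm-databases \
  --member serviceAccount:web-app-sa@web-applications.iam.gserviceaccount.com \
  --role roles/bigquery.dataViewer
```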

259. Your Dataproc cluster runs in a single Virtual Private Cloud (VPC) network in a single subnetwork with range 172.16.20.128/25. There are no private IP addresses available in the subnetwork. You want to add new VMs to communicate with your cluster using the minimum number of steps. What should you do?

  • A. Modify the existing subnet range to 172.16.20.0/24.
  • B. Create a new Secondary IP Range in the VPC and configure the VMs to use that range.
  • C. Create a new VPC network for the VMs. Enable VPC Peering between the VMs' VPC network and the Dataproc cluster VPC network.
  • D. Create a new VPC network for the VMs with a subnet of 172.32.0.0/16. Enable VPC network Peering between the Dataproc VPC network and the VMs' VPC network. Configure a custom Route exchange.

Expanding the subnet range to 172.16.20.0/24 makes additional IP addresses available for new VMs in a single step, without creating new networks, peerings, or routes.

Answer: A
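
A sketch of option A, assuming the subnet is named dataproc-subnet in us-central1:

```
# Expand the /25 subnet to a /24; this is an in-place, non-disruptive change.
gcloud compute networks subnets expand-ip-range dataproc-subnet \
  --region us-central1 --prefix-length 24
```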



260. You are building a backend service for an ecommerce platform that will persist transaction data from mobile and web clients. After the platform is launched, you expect a large volume of global transactions. Your business team wants to run SQL queries to analyze the data. You need to build a highly available and scalable data store for the platform. What should you do?

  • A. Create a multi-region Cloud Spanner instance with an optimized schema.
  • B. Create a multi-region Firestore database with aggregation query enabled.
  • C. Create a multi-region Cloud SQL for PostgreSQL database with optimized indexes.
  • D. Create a multi-region BigQuery dataset with optimized tables.

Cloud Spanner is a distributed relational database that supports global transactions and can be queried with SQL.

Firestore is a NoSQL database and is not a good fit for these transactions and SQL queries. Cloud SQL scales mainly vertically and struggles with large global transaction volumes. BigQuery handles large analytical queries well, but it is not appropriate for storing transactions.

Answer: A
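
A provisioning sketch; the instance configuration, names, and node count are placeholders:

```
# Multi-region instance (nam-eur-asia1 spans North America, Europe, and Asia).
gcloud spanner instances create ecommerce-txn \
  --config nam-eur-asia1 \
  --description "Ecommerce transactions" \
  --nodes 3

gcloud spanner databases create transactions --instance ecommerce-txn
```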


261. You are in charge of provisioning access for all Google Cloud users in your organization. Your company recently acquired a startup company that has their own Google Cloud organization. You need to ensure that your Site Reliability Engineers (SREs) have the same project permissions in the startup company's organization as in your own organization. What should you do?

  • A. In the Google Cloud console for your organization, select Create role from selection, and choose destination as the startup company's organization.
  • B. In the Google Cloud console for the startup company, select Create role from selection and choose source as the startup company's Google Cloud organization.
  • C. Use the gcloud iam roles copy command, and provide the Organization ID of the startup company's Google Cloud Organization as the destination.
  • D. Use the gcloud iam roles copy command, and provide the project IDs of all projects in the startup company's organization as the destination.

The gcloud iam roles copy command can copy a role from the existing organization into the startup company's organization by passing that organization's ID as the destination.

Answer: C
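
A sketch of option C; the custom role ID and the organization IDs are placeholders:

```
gcloud iam roles copy \
  --source sreProjectAccess --source-organization 111111111111 \
  --destination sreProjectAccess --dest-organization 222222222222
```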

262. You need to extract text from audio files by using the Speech-to-Text API. The audio files are pushed to a Cloud Storage bucket. You need to implement a fully managed, serverless compute solution that requires authentication and aligns with Google-recommended practices. You want to automate the call to the API by submitting each file to the API as the audio file arrives in the bucket. What should you do?

  • A. Create an App Engine standard environment triggered by Cloud Storage bucket events to submit the file URI to the Google Speech-to-Text API.
  • B. Run a Kubernetes job to scan the bucket regularly for incoming files, and call the Speech-to-Text API for each unprocessed file.
  • C. Run a Python script by using a Linux cron job in Compute Engine to scan the bucket regularly for incoming files, and call the Speech-to-Text API for each unprocessed file.
  • D. Create a Cloud Function triggered by Cloud Storage bucket events to submit the file URI to the Google Speech-to-Text API.

App Engine is not the right solution for this kind of event-driven processing.

Cloud Functions is a fully serverless solution with no infrastructure to manage. A Cloud Storage trigger detects each file upload, and the function can automatically call the Speech-to-Text API with the file URI.

Answer: D
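
A deployment sketch, assuming a 1st-gen function with an entry point named transcribe and a bucket named audio-uploads (both placeholders):

```
gcloud functions deploy transcribe-audio \
  --runtime python310 \
  --entry-point transcribe \
  --trigger-resource audio-uploads \
  --trigger-event google.storage.object.finalize \
  --region us-central1
# The function runs as a service account, so its calls to the Speech-to-Text API
# are authenticated without embedding credentials.
```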

263. Your customer wants you to create a secure website with autoscaling based on the compute instance CPU load. You want to enhance performance by storing static content in Cloud Storage. Which resources are needed to distribute the user traffic?

  • A. An external HTTP(S) load balancer with a managed SSL certificate to distribute the load and a URL map to target the requests for the static content to the Cloud Storage backend.
  • B. An external network load balancer pointing to the backend instances to distribute the load evenly. The web servers will forward the request to the Cloud Storage as needed.
  • C. An internal HTTP(S) load balancer together with Identity-Aware Proxy to allow only HTTPS traffic.
  • D. An external HTTP(S) load balancer to distribute the load and a URL map to target the requests for the static content to the Cloud Storage backend. Install the HTTPS certificates on the instance.

Managed SSL certificates renew automatically, so nothing has to be installed or rotated on the compute instances, and a URL map can route requests for static content to a Cloud Storage backend.

A network load balancer cannot handle HTTP traffic routing and does not support URL maps. Installing HTTPS certificates on the instances lacks scalability and is harder to manage.

Answer: A
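
A partial sketch of the static-content pieces; the bucket name, domain, and resource names are placeholders:

```
# Backend bucket that serves the static assets (optionally with Cloud CDN).
gcloud compute backend-buckets create static-assets \
  --gcs-bucket-name my-static-assets --enable-cdn

# Google-managed certificate; it is provisioned and renewed automatically.
gcloud compute ssl-certificates create web-cert \
  --domains www.example.com --global

# The external HTTP(S) load balancer's URL map then routes /static/* to the
# backend bucket and all other paths to the autoscaled instance group backend.
```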



264. The core business of your company is to rent out construction equipment at large scale. All the equipment that is being rented out has been equipped with multiple sensors that send event information every few seconds. These signals can vary from engine status, distance traveled, fuel level, and more. Customers are billed based on the consumption monitored by these sensors. You expect high throughput – up to thousands of events per hour per device – and need to retrieve consistent data based on the time of the event. Storing and retrieving individual signals should be atomic. What should you do?

  • A. Create files in Cloud Storage as data comes in.
  • B. Create a file in Filestore per device, and append new data to that file.
  • C. Ingest the data into Cloud SQL. Use multiple read replicas to match the throughput.
  • D. Ingest the data into Bigtable. Create a row key based on the event timestamp.

Bigtable is a NoSQL database that supports very high write throughput and is well suited to storing time-series data such as these sensor events, with atomic writes per row.

Answer: D
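
A provisioning sketch; the instance, cluster, and table names are placeholders, and the cbt line assumes the cbt CLI is installed:

```
gcloud bigtable instances create sensor-events \
  --display-name "Equipment sensor events" \
  --cluster-config id=events-c1,zone=us-central1-b,nodes=3

# Table with one column family for the signal values.
cbt -instance sensor-events createtable signals families=metrics

# A row key such as DEVICE_ID#TIMESTAMP keeps writes for different devices spread
# across nodes while still allowing time-ordered scans per device.
```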



265. You just installed the Google Cloud CLI on your new corporate laptop. You need to list the existing instances of your company on Google Cloud. What must you do before you run the gcloud compute instances list command? (Choose two.)

  • A. Run gcloud auth login, enter your login credentials in the dialog window, and paste the received login token to gcloud CLI.
  • B. Create a Google Cloud service account, and download the service account key. Place the key file in a folder on your machine where gcloud CLI can find it.
  • C. Download your Cloud Identity user account key. Place the key file in a folder on your machine where gcloud CLI can find it.
  • D. Run gcloud config set compute/zone $my_zone to set the default zone for gcloud CLI.
  • E. Run gcloud config set project $my_project to set the default project for gcloud CLI.

To use the Google Cloud CLI you must first authenticate with your Google account.
You should also set a default project; otherwise gcloud compute instances list requires the project to be specified explicitly on every run.

Answer: A, E
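
The two required steps, with a placeholder project ID:

```
gcloud auth login                      # browser flow to authenticate your account
gcloud config set project my-project   # default project for subsequent commands
gcloud compute instances list
```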



266. You are planning to migrate your on-premises data to Google Cloud. The data includes:

• 200 TB of video files in SAN storage
• Data warehouse data stored on Amazon Redshift
• 20 GB of PNG files stored on an S3 bucket

You need to load the video files into a Cloud Storage bucket, transfer the data warehouse data into BigQuery, and load the PNG files into a second Cloud Storage bucket. You want to follow Google-recommended practices and avoid writing any code for the migration. What should you do?

  • A. Use gcloud storage for the video files, Dataflow for the data warehouse data, and Storage Transfer Service for the PNG files.
  • B. Use Transfer Appliance for the videos, BigQuery Data Transfer Service for the data warehouse data, and Storage Transfer Service for the PNG files.
  • C. Use Storage Transfer Service for the video files, BigQuery Data Transfer Service for the data warehouse data, and Storage Transfer Service for the PNG files.
  • D. Use Cloud Data Fusion for the video files, Dataflow for the data warehouse data, and Storage Transfer Service for the PNG files.

Transfer Appliance is a hardware solution provided by Google for moving large amounts of data. It suits scenarios where uploading the data over the internet would be too slow or impractical, such as the 200 TB of video files.
BigQuery Data Transfer Service is the Google-recommended service for moving data from Amazon Redshift into BigQuery.
Storage Transfer Service is suited to cloud-to-cloud transfers and is used to move the PNG files from S3 to Cloud Storage.

Cloud Data Fusion is a service for building ETL pipelines and is not appropriate for a simple file transfer.

Answer: B
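
A sketch of the S3-to-Cloud-Storage piece with Storage Transfer Service; the bucket names are placeholders, and AWS credentials must be supplied separately (for example through a credentials file or the console):

```
gcloud transfer jobs create s3://legacy-png-assets gs://png-assets
```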



267. You want to deploy a new containerized application into Google Cloud by using a Kubernetes manifest. You want to have full control over the Kubernetes deployment, and at the same time, you want to minimize configuring infrastructure. What should you do?

  • A. Deploy the application on GKE Autopilot.
  • B. Deploy the application on Cloud Run.
  • C. Deploy the application on GKE Standard.
  • D. Deploy the application on Cloud Functions.

With GKE Autopilot, Google automates node management, so you only manage the container workloads while still using standard Kubernetes manifests.

Cloud Functions is a serverless environment for running individual functions, not Kubernetes manifests.

Answer: A
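
A short sketch; the cluster name, region, and manifest path are placeholders:

```
gcloud container clusters create-auto my-app-cluster --region us-central1
gcloud container clusters get-credentials my-app-cluster --region us-central1
kubectl apply -f deployment.yaml   # the existing Kubernetes manifest, unchanged
```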


268. Your team is building a website that handles votes from a large user population. The incoming votes will arrive at various rates. You want to optimize the storage and processing of the votes. What should you do?

  • A. Save the incoming votes to Firestore. Use Cloud Scheduler to trigger a Cloud Functions instance to periodically process the votes.
  • B. Use a dedicated instance to process the incoming votes. Send the votes directly to this instance.
  • C. Save the incoming votes to a JSON file on Cloud Storage. Process the votes in a batch at the end of the day.
  • D. Save the incoming votes to Pub/Sub. Use the Pub/Sub topic to trigger a Cloud Functions instance to process the votes.

Pub/Sub is well suited to ingesting large event streams and processing them in parallel, and triggering Cloud Functions from the topic gives serverless, event-driven processing.
Scaling is automatic, so sudden spikes in traffic are handled without issues.

Answer: D
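
A sketch wiring the topic to a function; the topic, function, and entry-point names are placeholders:

```
gcloud pubsub topics create votes

gcloud functions deploy process-votes \
  --runtime python310 \
  --entry-point process_vote \
  --trigger-topic votes \
  --region us-central1
# The website publishes each vote to the "votes" topic; Pub/Sub buffers spikes
# and the function scales out to drain the backlog.
```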



269. You are deploying an application on Google Cloud that requires a relational database for storage. To satisfy your company’s security policies, your application must connect to your database through an encrypted and authenticated connection that requires minimal management and integrates with Identity and Access Management (IAM). What should you do?

  • A. Deploy a Cloud SQL database with the SSL mode set to encrypted only, configure SSL/TLS client certificates, and configure a database user and password.
  • B. Deploy a Cloud SQL database with the SSL mode set to encrypted only, configure SSL/TLS client certificates, and configure IAM database authentication.
  • C. Deploy a Cloud SQL database and configure IAM database authentication. Access the database through the Cloud SQL Auth Proxy.
  • D. Deploy a Cloud SQL database and configure a database user and password. Access the database through the Cloud SQL Auth Proxy.

SSL/TLS client certificates are more cumbersome to manage than IAM-based authentication and do not integrate with IAM.

IAM database authentication lets access to Cloud SQL be managed through IAM, and the Cloud SQL Auth Proxy automatically encrypts traffic and handles authentication. This is the Google-recommended approach that maximizes security while minimizing management overhead.

Answer: C
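
A sketch of option C for PostgreSQL; the instance, tier, project, and service account names are placeholders, and the last line assumes the Cloud SQL Auth Proxy v2 binary:

```
# Instance with IAM database authentication enabled.
gcloud sql instances create app-db \
  --database-version POSTGRES_15 --region us-central1 \
  --tier db-custom-2-8192 \
  --database-flags cloudsql.iam_authentication=on

# Register the application's service account as an IAM database user.
gcloud sql users create app-sa@my-project.iam.gserviceaccount.com \
  --instance app-db --type CLOUD_IAM_SERVICE_ACCOUNT

# Run the Auth Proxy next to the application; it encrypts and authenticates the connection.
./cloud-sql-proxy --auto-iam-authn my-project:us-central1:app-db
```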


270. You have two Google Cloud projects: project-a with VPC vpc-a (10.0.0.0/16) and project-b with VPC vpc-b (10.8.0.0/16). Your frontend application resides in vpc-a and the backend API services are deployed in vpc-b. You need to efficiently and cost-effectively enable communication between these Google Cloud projects. You also want to follow Google-recommended practices. What should you do?

  • A. Create an OpenVPN connection between vpc-a and vpc-b.
  • B. Create VPC Network Peering between vpc-a and vpc-b.
  • C. Configure a Cloud Router in vpc-a and another Cloud Router in vpc-b.
  • D. Configure a Cloud Interconnect connection between vpc-a and vpc-b.

VPC Network Peering is the simplest and most cost-effective way to connect VPC networks across projects within Google Cloud. Traffic stays on internal IP addresses, so performance is high and there are no additional VPN charges.

A Cloud Router is needed for dynamic routing, and Cloud Interconnect is for connecting on-premises networks to Google Cloud.

Answer: B
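
A sketch of the peering, which must be created from both sides:

```
gcloud compute networks peerings create a-to-b \
  --project project-a --network vpc-a \
  --peer-project project-b --peer-network vpc-b

gcloud compute networks peerings create b-to-a \
  --project project-b --network vpc-b \
  --peer-project project-a --peer-network vpc-a
# 10.0.0.0/16 and 10.8.0.0/16 do not overlap, so subnet routes are exchanged automatically.
```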

271. Your company is running a critical workload on a single Compute Engine VM instance. Your company's disaster recovery policies require you to back up the entire instance’s disk data every day. The backups must be retained for 7 days. You must configure a backup solution that complies with your company’s security policies and requires minimal setup and configuration. What should you do?

  • A. Configure the instance to use persistent disk asynchronous replication.
  • B. Configure daily scheduled persistent disk snapshots with a retention period of 7 days.
  • C. Configure Cloud Scheduler to trigger a Cloud Function each day that creates a new machine image and deletes machine images that are older than 7 days.
  • D. Configure a bash script using gsutil to run daily through a cron job. Copy the disk’s files to a Cloud Storage bucket with archive storage class and an object lifecycle rule to delete the objects after 7 days.

Persistent disk asynchronous replication is a failover feature, not a backup mechanism.

Scheduled persistent disk snapshots are the simplest backup method Google provides, and they support both scheduling and a retention period.

Answer: B
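
A sketch of option B; the policy name, region, zone, and disk name are placeholders:

```
# Daily snapshot schedule that keeps snapshots for 7 days.
gcloud compute resource-policies create snapshot-schedule daily-7d \
  --region us-central1 \
  --daily-schedule --start-time 04:00 \
  --max-retention-days 7

# Attach the schedule to the instance's disk.
gcloud compute disks add-resource-policies app-disk \
  --zone us-central1-a --resource-policies daily-7d
```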

272. Your company requires that Google Cloud products are created with a specific configuration to comply with your company’s security policies. You need to implement a mechanism that will allow software engineers at your company to deploy and update Google Cloud products in a preconfigured and approved manner. What should you do?

  • A. Create Java packages that utilize the Google Cloud Client Libraries for Java to configure Google Cloud products. Store and share the packages in a source code repository.
  • B. Create bash scripts that utilize the Google Cloud CLI to configure Google Cloud products. Store and share the bash scripts in a source code repository.
  • C. Use the Google Cloud APIs by using curl to configure Google Cloud products. Store and share the curl commands in a source code repository.
  • D. Create Terraform modules that utilize the Google Cloud Terraform Provider to configure Google Cloud products. Store and share the modules in a source code repository.

Calling the APIs with curl is a manual approach that is hard to manage and does not scale.

Terraform is the Infrastructure-as-Code (IaC) solution recommended on Google Cloud for automating configuration and deployment.
Predefined modules make it easy to enforce the security policies and let engineers deploy in a consistent, approved way.

Answer: D
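
A short usage sketch from the engineers' side, assuming the approved Terraform modules live in a shared source repository (the repository URL, paths, and variables are placeholders):

```
# Pull the approved module configuration and apply it with Terraform.
git clone https://source.example.com/platform/approved-modules.git
cd approved-modules/envs/dev
terraform init
terraform plan -var "project_id=my-project"
terraform apply -var "project_id=my-project"
```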



273. You are a Google Cloud organization administrator. You need to configure organization policies and log sinks on Google Cloud projects that cannot be removed by project users to comply with your company's security policies. The security policies are different for each company department. Each company department has a user with the Project Owner role assigned to their projects. What should you do?

  • A. Use a standard naming convention for projects that includes the department name. Configure organization policies on the organization and log sinks on the projects.
  • B. Use a standard naming convention for projects that includes the department name. Configure both organization policies and log sinks on the projects.
  • C. Organize projects under folders for each department. Configure both organization policies and log sinks on the folders.
  • D. Organize projects under folders for each department. Configure organization policies on the organization and log sinks on the folders.

Organizing projects under a folder per department is the Google-recommended structure. Organization policies and log sinks configured at the folder level are inherited by the projects and cannot be changed from within them, so the security policies remain in place even though each department has a user with the Project Owner role.

Answer: C
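
A partial sketch for one department; the organization ID, folder ID, names, and filter are placeholders:

```
# Folder for the department under the organization.
gcloud resource-manager folders create \
  --display-name "Finance" --organization 111111111111

# Aggregated audit-log sink defined on the folder; project users cannot remove it.
gcloud logging sinks create finance-audit-sink \
  storage.googleapis.com/finance-audit-logs \
  --folder FOLDER_ID --include-children \
  --log-filter 'logName:"cloudaudit.googleapis.com"'

# Organization policy constraints are likewise set on the folder, for example with
# gcloud resource-manager org-policies set-policy --folder=FOLDER_ID policy.yaml
```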



274. You are deploying a web application using Compute Engine. You created a managed instance group (MIG) to host the application. You want to follow Google-recommended practices to implement a secure and highly available solution. What should you do?

  • A. Use SSL proxy load balancing for the MIG and an A record in your DNS private zone with the load balancer's IP address.
  • B. Use SSL proxy load balancing for the MIG and a CNAME record in your DNS public zone with the load balancer’s IP address.
  • C. Use HTTP(S) load balancing for the MIG and a CNAME record in your DNS private zone with the load balancer’s IP address.
  • D. Use HTTP(S) load balancing for the MIG and an A record in your DNS public zone with the load balancer’s IP address.

SSL proxy load balancing is for TCP/SSL traffic, not HTTP(S).

An external HTTP(S) load balancer provides global load balancing and SSL/TLS termination, maximizing both security and availability.
An A record is the standard way to map a domain name to the load balancer's public IP address in the public DNS zone.

Answer: D
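
A sketch of the DNS piece, assuming an existing public managed zone named public-zone and a documentation-range IP standing in for the load balancer address:

```
gcloud dns record-sets create www.example.com. \
  --zone public-zone --type A --ttl 300 \
  --rrdatas 203.0.113.10
```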

275. You have several hundred microservice applications running in a Google Kubernetes Engine (GKE) cluster. Each microservice is a deployment with resource limits configured for each container in the deployment. You've observed that the resource limits for memory and CPU are not appropriately set for many of the microservices. You want to ensure that each microservice has right sized limits for memory and CPU. What should you do?

  • A. Configure a Vertical Pod Autoscaler for each microservice.
  • B. Modify the cluster's node pool machine type and choose a machine type with more memory and CPU.
  • C. Configure a Horizontal Pod Autoscaler for each microservice.
  • D. Configure GKE cluster autoscaling.

The Vertical Pod Autoscaler (VPA) automatically adjusts each Pod's CPU and memory resource requests so that appropriate limits are set.
It lets each microservice in GKE end up with right-sized resources automatically.
The Horizontal Pod Autoscaler (HPA) adjusts the number of Pods but does not optimize the resources of an individual Pod.

GKE cluster autoscaling resizes the cluster but does not optimize the resources of individual microservices.

Answer: A
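
A sketch for one microservice, assuming vertical Pod autoscaling is enabled on the cluster and a Deployment named checkout exists (names are placeholders):

```
# Enable vertical Pod autoscaling on the cluster (one-time).
gcloud container clusters update my-cluster \
  --zone us-central1-a --enable-vertical-pod-autoscaling

# VPA object that recommends and applies right-sized requests for the Deployment.
cat <<'EOF' | kubectl apply -f -
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: checkout-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout
  updatePolicy:
    updateMode: "Auto"
EOF
```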


276. Your company uses BigQuery to store and analyze data. Upon submitting your query in BigQuery, the query fails with a quotaExceeded error. You need to diagnose the issue causing the error. What should you do? (Choose two.)

  • A. Use BigQuery BI Engine to analyze the issue.
  • B. Use the INFORMATION_SCHEMA views to analyze the underlying issue.
  • C. Configure Cloud Trace to analyze the issue.
  • D. Search errors in Cloud Audit Logs to analyze the issue.
  • E. View errors in Cloud Monitoring to analyze the issue.

BI Engine accelerates query performance; it has nothing to do with quota problems.

The INFORMATION_SCHEMA views let you analyze query usage and quota consumption.

Cloud Trace is an application performance tool and cannot directly diagnose BigQuery query failures. Cloud Audit Logs record the failed jobs, so searching them pinpoints the quota issue.

Cloud Monitoring is suited to resource monitoring, not to analyzing the cause of a BigQuery quota error.

Answer: B, D
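
A diagnostic sketch with the bq and gcloud CLIs; the region qualifier and time window are assumptions:

```
# Recent jobs that failed with quotaExceeded, from INFORMATION_SCHEMA.
bq query --use_legacy_sql=false '
SELECT job_id, user_email, total_bytes_processed, error_result.message
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
  AND error_result.reason = "quotaExceeded"'

# Matching entries in Cloud Audit Logs.
gcloud logging read \
  'resource.type="bigquery_resource" AND protoPayload.status.message:"quotaExceeded"' \
  --limit 10
```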



277. Your team has developed a stateless application which requires it to be run directly on virtual machines. The application is expected to receive a fluctuating amount of traffic and needs to scale automatically. You need to deploy the application. What should you do?

  • A. Deploy the application on a managed instance group and configure autoscaling.
  • B. Deploy the application on a Kubernetes Engine cluster and configure node pool autoscaling.
  • C. Deploy the application on Cloud Functions and configure the maximum number of instances.
  • D. Deploy the application on Cloud Run and configure autoscaling.

A managed instance group (MIG) is the best fit here: it runs the application directly on VMs and supports autoscaling.

Cloud Run is container based and does not meet the requirement to run directly on virtual machines.

Answer: A
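
A sketch of the MIG with CPU-based autoscaling; the names, machine type, and thresholds are placeholders:

```
gcloud compute instance-templates create web-template \
  --machine-type e2-medium \
  --image-family debian-12 --image-project debian-cloud

gcloud compute instance-groups managed create web-mig \
  --template web-template --size 2 --zone us-central1-a

gcloud compute instance-groups managed set-autoscaling web-mig \
  --zone us-central1-a \
  --min-num-replicas 2 --max-num-replicas 10 \
  --target-cpu-utilization 0.6
```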

278. Your web application is hosted on Cloud Run and needs to query a Cloud SQL database. Every morning during a traffic spike, you notice API quota errors in Cloud SQL logs. The project has already reached the maximum API quota. You want to make a configuration change to mitigate the issue. What should you do?

  • A. Modify the minimum number of Cloud Run instances.
  • B. Use traffic splitting.
  • C. Modify the maximum number of Cloud Run instances.
  • D. Set a minimum concurrent requests environment variable for the application.

When a Cloud Run service is idle it scales to zero instances, and each new request then starts a fresh instance, causing a cold start. During a cold start the application may make several API calls to initialize and connect to the Cloud SQL database. When traffic spikes in the morning, many cold starts happen at once and can exceed the Cloud SQL API quota. Raising the minimum number of instances keeps warm instances available and avoids the burst of cold starts.

Answer: A
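
A one-line sketch; the service name, region, and instance count are placeholders:

```
gcloud run services update my-service \
  --region us-central1 --min-instances 5
```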

279. You need to deploy a single stateless web application with a web interface and multiple endpoints. For security reasons, the web application must be reachable from an internal IP address from your company's private VPC and on-premises network. You also need to update the web application multiple times per day with minimal effort and want to manage a minimal amount of cloud infrastructure. What should you do?

  • A. Deploy the web application on Google Kubernetes Engine standard edition with an internal ingress.
  • B. Deploy the web application on Cloud Run with Private Google Access configured.
  • C. Deploy the web application on Cloud Run with Private Service Connect configured.
  • D. Deploy the web application to GKE Autopilot with Private Google Access configured.

Private Google Access only provides internal access to Google APIs and services; by itself it does not expose the Cloud Run service on an internal address.

Cloud Run combined with Private Service Connect (PSC) makes the service reachable on an internal IP address from the VPC and from the on-premises network.

Answer: C
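
A partial sketch; the service, network, and address values are placeholders, and the PSC endpoint shown targets the Google APIs bundle, which is one common way to reach Cloud Run privately:

```
# Restrict the Cloud Run service to internal traffic.
gcloud run deploy web-app \
  --image gcr.io/my-project/web-app:latest \
  --region us-central1 --ingress internal

# Private Service Connect endpoint inside the VPC for Google APIs.
gcloud compute addresses create pscapis \
  --global --network my-vpc \
  --purpose PRIVATE_SERVICE_CONNECT --addresses 10.3.0.5

gcloud compute forwarding-rules create pscapis \
  --global --network my-vpc \
  --address pscapis \
  --target-google-apis-bundle all-apis
# On-premises clients reach the internal endpoint over the existing hybrid connectivity.
```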

280. You use Cloud Logging to capture application logs. You now need to use SQL to analyze the application logs in Cloud Logging, and you want to follow Google-recommended practices. What should you do?

  • A. Develop SQL queries by using Gemini for Google Cloud.
  • B. Enable Log Analytics for the log bucket and create a linked dataset in BigQuery.
  • C. Create a schema for the storage bucket and run SQL queries for the data in the bucket.
  • D. Export logs to a storage bucket and create an external view in BigQuery.

Enabling Log Analytics on the log bucket and creating a linked BigQuery dataset lets you run SQL queries over the logs, which is the Google-recommended approach.

Cloud Storage cannot run SQL queries directly, and exporting logs to a storage bucket just moves the data around unnecessarily.

Answer: B
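
A sketch of option B on the _Default bucket; the link name is a placeholder:

```
# Upgrade the log bucket to use Log Analytics.
gcloud logging buckets update _Default \
  --location global --enable-analytics

# Create a linked BigQuery dataset so the logs can be queried with SQL.
gcloud logging links create app_logs_link \
  --bucket _Default --location global
```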

281. You need to deploy a third-party software application onto a single Compute Engine VM instance. The application requires the highest speed read and write disk access for the internal database. You need to ensure the instance will recover on failure. What should you do?

  • A. Create an instance template. Set the disk type to be an SSD Persistent Disk. Launch the instance template as part of a stateful managed instance group.
  • B. Create an instance template. Set the disk type to be an SSD Persistent Disk. Launch the instance template as part of a stateless managed instance group.
  • C. Create an instance template. Set the disk type to be Hyperdisk Extreme. Launch the instance template as part of a stateful managed instance group.
  • D. Create an instance template. Set the disk type to be Hyperdisk Extreme. Launch the instance template as part of a stateless managed instance group.

Hyperdisk Extreme is the fastest disk option on Google Cloud, providing the highest read/write performance.
A stateful managed instance group (MIG) recreates the instance on failure while preserving its disk state, so the application recovers with its data intact.

Answer: C
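
A partial sketch; the machine type, disk size, IOPS, and names are placeholders, and Hyperdisk Extreme requires a supported machine series (for example larger C3 shapes):

```
gcloud compute instance-templates create db-template \
  --machine-type c3-standard-88 \
  --image-family debian-12 --image-project debian-cloud \
  --create-disk device-name=data,type=hyperdisk-extreme,size=500GB,provisioned-iops=50000

gcloud compute instance-groups managed create db-mig \
  --template db-template --size 1 --zone us-central1-a \
  --stateful-disk device-name=data,auto-delete=never
```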

 

282. You have a VM instance running in a VPC with single-stack subnets. You need to ensure that the VM instance has a fixed IP address so that other services hosted in the same VPC can communicate with the VM. You want to follow Google-recommended practices while minimizing cost. What should you do?

  • A. Promote the existing IP address of the VM to become a static external IP address.
  • B. Promote the existing IP address of the VM to become a static internal IP address.
  • C. Reserve a new static external IPv6 address and assign the new IP address to the VM.
  • D. Reserve a new static external IP address and assign the new IP address to the VM.

The VM only needs to be reachable from within the VPC, so a static internal IP is enough (and, unlike a static external IP, it adds no cost). Promoting the VM's existing internal address to a static internal IP keeps the address unchanged.

Answer: B
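
A sketch of the promotion; the address name, region, subnet, and current IP are placeholders:

```
# Reserve the VM's current ephemeral internal IP as a static internal address.
gcloud compute addresses create app-vm-internal \
  --region us-central1 --subnet default \
  --addresses 10.128.0.12
```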



283. Your preview application, deployed on a single-zone Google Kubernetes Engine (GKE) cluster in us-central1, has gained popularity. You are now ready to make the application generally available. You need to deploy the application to production while ensuring high availability and resilience. You also want to follow Google-recommended practices. What should you do?

  • A. Use the gcloud container clusters create command with the options --enable-multi-networking and --enable-autoscaling to create an autoscaling zonal cluster and deploy the application to it.
  • B. Use the gcloud container clusters create-auto command to create an autopilot cluster and deploy the application to it.
  • C. Use the gcloud container clusters update command with the option --region us-central1 to update the cluster and deploy the application to it.
  • D. Use the gcloud container clusters update command with the option --node-locations us-central1-a,us-central1-b to update the cluster and deploy the application to the nodes.

A zonal cluster does not guarantee high availability.

An Autopilot cluster is the Google-managed way to run GKE, with autoscaling and optimization built in, and it is created as a regional cluster.
It offers low operational overhead, cost efficiency, and high availability.

Changing only the node locations of the existing zonal cluster is not the best way to guarantee HA.

Answer: B