Search results for GCP Infra
CureIAM - Clean Accounts Over Permissions In GCP Infra At Scale
Clean up of over-permissioned IAM accounts on GCP infra in an automated way.

CureIAM is an easy-to-use, reliable, and performant engine for Least Privilege Principle enforcement on GCP cloud infra. It enables DevOps and security teams to quickly clean up accounts in GCP infra that have been granted more permissions than they require. CureIAM fetches recommendations and insights from the GCP IAM Recommender, scores them, and enforces those recommendations automatically on a daily basis. It takes care of scheduling and all other aspects of running these enforcement jobs at scale. It is built on top of the GCP IAM Recommender APIs and the Cloudmarker framework.

Key features

Discover what makes CureIAM scalable and production grade.

Config driven: The entire workflow of CureIAM is config driven. Skip to the Config section to learn more about it.
Scalable: It is designed to scale thanks to its plugin-driven, multiprocess, and multi-threaded approach.
Handles scheduling: Scheduling is embedded in CureIAM itself; configure the time, and CureIAM will run daily at that time.
Plugin driven: The CureIAM codebase is completely plugin oriented, which means you can plug and play the existing plugins or create new ones to add more functionality.
Track actionable insights: Every action that CureIAM takes is recorded for audit purposes, either in a file store or in an Elasticsearch store. If you want, you can build other store plugins to push the records to other stores for tracking purposes.
Scoring and enforcement: Every recommendation fetched by CureIAM is scored against various parameters, producing several scores such as safe_to_apply_score, risk_score, and over_privilege_score. Each score serves a different purpose: safe_to_apply_score, for example, determines whether a recommendation can be applied automatically, based on the threshold set in the CureIAM.yaml config file.

Usage

Since CureIAM is built with Python, you can run it locally with these commands.
Before running, make sure a configuration file is present at one of /etc/CureIAM.yaml, ~/.CureIAM.yaml, ~/CureIAM.yaml, or CureIAM.yaml, and that a service account JSON file is present in the current directory, preferably named cureiamSA.json. This SA private key can be named anything, but for the Docker image build it is preferred to use this name. Make sure to reference this file in the config for GCP cloud.

# Install necessary dependencies
$ pip install -r requirements.txt

# Run CureIAM now
$ python -m CureIAM -n

# Run the CureIAM process as a scheduler
$ python -m CureIAM

# Check CureIAM help
$ python -m CureIAM --help

CureIAM can also be run inside a Docker environment. This is completely optional and can be used for CI/CD with a K8s cluster deployment.

# Build the docker image from the dockerfile
$ docker build -t cureiam .

# Run the image, as a scheduler
$ docker run -d cureiam

# Run the image now
$ docker run -f cureiam -m cureiam -n

Config

The CureIAM.yaml configuration file is the heart of the CureIAM engine. Everything the engine does, it does based on the pipeline configured in this config file. Let's break the config down into sections to make it simpler. The first section covers logging configuration and scheduler configuration:

logger:
  version: 1
  disable_existing_loggers: false
  formatters:
    verysimple:
      format: >-
        [%(process)s]
        %(name)s:%(lineno)d
        - %(message)s
      datefmt: "%Y-%m-%d %H:%M:%S"
  handlers:
    rich_console:
      class: rich.logging.RichHandler
      formatter: verysimple
    file:
      class: logging.handlers.TimedRotatingFileHandler
      formatter: simple
      filename: /tmp/CureIAM.log
      when: midnight
      encoding: utf8
      backupCount: 5
  loggers:
    adal-python:
      level: INFO
  root:
    level: INFO
    handlers:
      - rich_console
      - file

schedule: "16:00"

This subsection of the config uses the Rich logging module and schedules CureIAM to run daily at 16:00. The next section configures the different modules which MIGHT be used in the pipeline. These fall under the plugins section in CureIAM.yaml.
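CureIAM's embedded scheduler keys off the schedule: "16:00" value in this config. As a rough illustration of what daily scheduling involves (a standard-library sketch, not CureIAM's actual implementation, which uses the schedule library), the engine only needs to work out how long to sleep until the next occurrence of the configured time:

```python
from datetime import datetime, timedelta

def seconds_until_next_run(hhmm: str, now: datetime) -> float:
    """Seconds from `now` until the next daily occurrence of HH:MM."""
    hour, minute = (int(part) for part in hhmm.split(":"))
    next_run = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if next_run <= now:          # today's slot already passed -> run tomorrow
        next_run += timedelta(days=1)
    return (next_run - now).total_seconds()

# A scheduler loop would then simply sleep and fire:
#   while True:
#       time.sleep(seconds_until_next_run("16:00", datetime.now()))
#       run_audits()   # hypothetical entry point
```

With the config above, a process started at 15:00 would sleep for exactly one hour before its first enforcement run.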
You can think of this section as declarations for the different plugins.

plugins:
  gcpCloud:
    plugin: CureIAM.plugins.gcp.gcpcloud.GCPCloudIAMRecommendations
    params:
      key_file_path: cureiamSA.json

  filestore:
    plugin: CureIAM.plugins.files.filestore.FileStore

  gcpIamProcessor:
    plugin: CureIAM.plugins.gcp.gcpcloudiam.GCPIAMRecommendationProcessor
    params:
      mode_scan: true
      mode_enforce: true
      enforcer:
        key_file_path: cureiamSA.json
        allowlist_projects:
          - alpha
        blocklist_projects:
          - beta
        blocklist_accounts:
          - foo@bar.com
        allowlist_account_types:
          - user
          - group
          - serviceAccount
        blocklist_account_types:
          - None
        min_safe_to_apply_score_user: 0
        min_safe_to_apply_score_group: 0
        min_safe_to_apply_score_SA: 50

  esstore:
    plugin: CureIAM.plugins.elastic.esstore.EsStore
    params:
      # Change http to https later if your elastic is using the https scheme
      scheme: http
      host: es-host.com
      port: 9200
      index: cureiam-stg
      username: security
      password: securepassword

Each of these plugin declarations has to be of this form:

plugins:
  <plugin-name>:
    plugin: <class-name-as-python-path>
    params:
      param1: val1
      param2: val2

For example, the plugin path CureIAM.stores.esstore.EsStore points to the EsStore class in that module. All the params defined in the YAML have to match the parameters declared in the __init__() function of the same plugin class.

Once the plugins are defined, the next step is to define the pipeline for auditing. It goes like this:

audits:
  IAMAudit:
    clouds:
      - gcpCloud
    processors:
      - gcpIamProcessor
    stores:
      - filestore
      - esstore

Multiple audits can be created this way. The one created here is named IAMAudit, with four plugins in use: gcpCloud, gcpIamProcessor, filestore, and esstore. Note that these are the same plugin names defined in Step 2. Again, this only defines the pipeline; it does not run it. An audit is only considered for running once it is referenced in the next step.

Finally, tell CureIAM to run the audits defined in the previous step.
run:
  - IAMAudit

And that makes up the entire configuration for CureIAM. You can find the full sample here; this config-driven pipeline concept is inherited from the Cloudmarker framework.

Dashboard

The JSON indexed into Elasticsearch by the Elasticsearch store plugin can be used to generate a dashboard in Kibana.

Contribute

[Please do!] We are looking for any kind of contribution to improve CureIAM's core functionality and documentation. When in doubt, make a PR!

Credits

Gojek Product Security Team

Demo

=============

NEW UPDATES May 2023 (0.2.0)

Refactoring
- Breaking the large code down into multiple small functions
- Moving all plugins into the plugins folder: Esstore, files, Cloud and GCP
- Fixing zero-divide issues
- Migration to the new major version of Elastic
- Changed configuration in the CureIAM.yaml file
- Tested on Python 3.9.x

Library updates

Pinning library versions to avoid backward-compatibility issues:

Elastic==8.7.0 # previously 7.17.9
elasticsearch==8.7.0
google-api-python-client==2.86.0
PyYAML==6.0
schedule==1.2.0
rich==13.3.5

Docker files
- Added Docker Compose for local Elastic and Kibana in elastic
- Added .env-ex; change .env-ex to .env before running Docker
- Running docker compose: docker-compose -f docker_compose_es.yaml up

Features

Added the capability to run a scan without applying the recommendations. By default, if mode_scan is false, mode_enforce won't run.

mode_scan: true
mode_enforce: false

Turned off the email function temporarily.

Download CureIAM
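Taken together, the plugins and audits sections describe a pipeline the engine assembles at runtime: each plugin declaration is a Python class path plus __init__ parameters, and each audit wires cloud plugins into processors and stores. A minimal, single-threaded sketch of both ideas (class and method names are illustrative stand-ins, not CureIAM's actual API; the real engine runs this multiprocess and multi-threaded):

```python
import importlib

def load_plugin(definition):
    """Instantiate a plugin from a declaration of the form
    {'plugin': 'pkg.module.ClassName', 'params': {...}}.
    The params keys must match the class's __init__ keyword arguments."""
    module_path, _, class_name = definition["plugin"].rpartition(".")
    cls = getattr(importlib.import_module(module_path), class_name)
    return cls(**definition.get("params", {}))

def run_audit(clouds, processors, stores):
    """Single-threaded sketch of one audit: records from every cloud
    plugin flow through every processor and land in every store."""
    for cloud in clouds:
        for record in cloud.read():
            for processor in processors:
                record = processor.eval(record)
            for store in stores:
                store.write(record)

# load_plugin demo with a stdlib class standing in for a plugin path:
frac = load_plugin({"plugin": "fractions.Fraction",
                    "params": {"numerator": 3, "denominator": 4}})
print(frac)  # 3/4

# Pipeline demo with stand-in plugins (real ones live under CureIAM.plugins.*):
class FakeCloud:
    def read(self):
        yield {"account": "sa@project.iam.gserviceaccount.com"}

class FakeScorer:
    def eval(self, record):
        record["safe_to_apply_score"] = 50  # hypothetical scoring rule
        return record

class ListStore:
    def __init__(self):
        self.records = []
    def write(self, record):
        self.records.append(record)

store = ListStore()
run_audit([FakeCloud()], [FakeScorer()], [store])
print(store.records[0]["safe_to_apply_score"])  # 50
```

This is why the params in the YAML must line up with each plugin class's __init__: they are passed through as keyword arguments when the engine instantiates the plugin.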
Source: KitPloit
Cross-Tenant Request Forgery Attack in Multi-Tenancy Environments - Albert Yu & Alan Bishop
Unveiling the Cross-Tenant Request Forgery Attack in Multi-Tenancy Environments

Description

To build a SaaS application platform, most platform owners rely on integrations with more popular ecosystems such as Microsoft Azure, Google Workspace, Okta, Github, Atlassian Jira, etc. The industry has moved towards open standards like OAuth for access delegation, but there are several flavors (e.g. 3LO, 2LO, SPA) of OAuth, and each flavor works in different scenarios. Some API access mandates a particular flavor of OAuth. What adds to the complexity is that most platform owners support more than one customer, aka multi-tenancy. Our research has uncovered significant challenges and potential security vulnerabilities that arise when implementing 2LO (either via Client Credentials or JWT bearer) in a multi-tenancy environment. Once the flaw is exploited, attackers can compromise another tenant that co-exists on the same platform and obtain its data without being noticed. Given the difficulty of implementing the solution correctly with the right usability, we believe there is a lot of misimplementation lurking in the wild. We call the attack "Cross-Tenant Request Forgery". Our goal is to make developers aware of this kind of vulnerability and discuss the remediations in different scenarios. For some cases, the remediations are vendor specific.

Albert Yu, Co-Founder and CTO, Anzenna Inc.

Albert has been a lifelong security practitioner and has been building security infrastructure for 20+ years. Most recently Albert was building GCP security infrastructure at Google. Before Google, Albert was at Atlassian and Yahoo! (US), building security platforms and infrastructures. Prior to that, he built the security engineering program for Yahoo! (APAC). Albert has a PhD in Computer Science from the University of Hong Kong. Now Albert is a co-founder of Anzenna Inc, aiming to make security a habit for employees.

Alan Bishop, Lead Software Developer, Anzenna, Inc.

Alan Bishop is a lead software developer at Anzenna, Inc., a startup focused on scaling security in the enterprise across the entire organization. Although primarily a backend software engineer these days, Alan has been finding and reporting security bugs since the 1980s. He is mostly focused on web application security, with extra attention on authentication and identity issues.

- Managed by the OWASP® Foundation https://owasp.org/
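The remediation the abstract hints at can be reduced to one invariant: every 2LO request must be checked against the tenant the integration was originally installed for. A minimal sketch of that per-tenant binding check (all names are illustrative — the tid claim follows Azure AD conventions, other providers use different claims, and real code must verify the token signature before trusting any claim):

```python
# Sketch: bind each installation to the tenant recorded at install time,
# and reject 2LO (client-credential / JWT-bearer) requests whose token
# claims a different tenant. Signature verification is assumed to have
# already happened; claim name "tid" is an Azure AD-style placeholder.

TENANT_BINDINGS = {
    # installation_id -> tenant id captured when the customer installed the app
    "install-123": "tenant-aaaa",
    "install-456": "tenant-bbbb",
}

def is_cross_tenant_forgery(installation_id: str, token_claims: dict) -> bool:
    """True if the token's tenant does not match the tenant this
    installation was bound to at install time (or no binding exists)."""
    expected = TENANT_BINDINGS.get(installation_id)
    return expected is None or token_claims.get("tid") != expected

print(is_cross_tenant_forgery("install-123", {"tid": "tenant-aaaa"}))  # False
print(is_cross_tenant_forgery("install-123", {"tid": "tenant-bbbb"}))  # True
```

The design point is that the binding is established once, out of band, at installation time; trusting whatever tenant identifier arrives in the request itself is exactly what enables the forgery.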

Source: OWASP Foundation
Just-In-Time Access in Google Cloud: Enhancing Security with Real-time Alerts
Just In Time Alerts

In today's digital age, as organizations increasingly migrate their operations to the cloud, the importance of robust security mechanisms cannot be overstated. One such mechanism that has gained significant traction in cloud environments like Google Cloud is Just-In-Time (JIT) access. But what is JIT, and why is it so crucial in cloud security? Let's delve deeper.

Understanding Just-In-Time (JIT) Access

JIT access refers to the practice of granting permissions to resources only when they are needed and for the shortest duration necessary. Instead of having persistent, always-on permissions, users request access when required, and once granted, the access expires after a set duration.

In cloud security, Just-In-Time (JIT) offers multiple benefits:

Minimized attack surface: By narrowing the window of opportunity for potential attackers, the risk of unauthorized access or breaches is significantly diminished.
Reduced insider threats: Even well-intentioned employees can inadvertently pose security risks. JIT ensures they only have access when necessary, minimizing the risk of unintentional mishaps.
Compliance and governance: For organizations bound by regulatory requirements, JIT offers a means to demonstrate stringent access controls.

The Necessity of Recording and Checking JIT Elevation Requests

While JIT serves as a proactive security measure, it's equally vital to have reactive strategies in place. Logging and auditing every JIT elevation request ensures:

Accountability: Every access request can be traced back to an individual, fostering a culture of responsibility.
Forensics: In the unfortunate event of a security breach, logs provide invaluable insights into the sequence of events, facilitating quicker resolution and mitigation.
Continuous refinement: Periodic review of access logs can reveal patterns, providing insights to refine and tighten access policies further. Does an employee have a JIT role they are not using? Remove it.

The Power of Real-time Alerts

The true potential of JIT is realized when combined with real-time alerts. Instead of merely logging access, real-time notifications ensure that relevant stakeholders are instantly informed of JIT elevation requests. This real-time feedback loop is crucial for:

Timely action: In the event of a suspicious access attempt, teams can respond in real time.
Enhanced transparency: Continuous notifications ensure that stakeholders are always in the loop, fostering an environment of transparency and trust.

Receiving Timely Alerts on JIT Elevation Requests in Google Cloud

Google Cloud and the GitHub repository resources provided below do not offer Just-In-Time (JIT) alerts as a built-in feature. However, Google Cloud Platform (GCP) provides tools that make integration easy. To take full advantage of these features, you must create alerts for real-time JIT elevation requests. Here's a step-by-step guide on how to do this, including how to incorporate Slack notifications.

Setting Up the Alerting Infrastructure

Log sink: A dedicated log sink is configured within Google Cloud to monitor the project overseeing the JIT software. This sink captures all events related to JIT elevation requests.
Pub/Sub topic: Events from the log sink are routed to a Pub/Sub topic, a real-time messaging service in Google Cloud.
Cloud Function with Eventarc: An automated function triggered by Google Eventarc processes events as they arrive at the Pub/Sub topic.
Slack integration: The function parses the incoming data and sends relevant alerts to a designated Slack channel, ensuring stakeholders are promptly notified.

The Code

The code below can be placed into a v2 GCP Cloud Function running Python 3.11:

import base64
import json
import requests
import functions_framework

SLACK_URL = "https://YourSlackURL"
SLACK_CHANNEL = "EndKey"

@functions_framework.cloud_event
def get_pub(cloud_event):
    raw_data = base64.b64decode(cloud_event.data["message"]["data"])
    parsed_data = parse_pubsub_message(raw_data)
    send_slack(parsed_data)

def parse_pubsub_message(data):
    message = json.loads(data.decode('utf-8'))
    labels = message.get('labels', {})
    return {
        'justification': labels.get('justification'),
        'project_id': labels.get('project_id'),
        'role': labels.get('role'),
        'user': labels.get('user'),
        'textPayload': message.get('textPayload')
    }

def send_slack(parsed_data):
    slack_message = {
        "blocks": [
            {
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": f"*User:* {parsed_data['user']}\n*Role:* {parsed_data['role']}\n*Project:* {parsed_data['project_id']}\n*Justification:* {parsed_data['justification']}"
                }
            },
            {
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": f"*Description:* ```{parsed_data['textPayload']}```"
                }
            }
        ],
        "channel": SLACK_CHANNEL,
        "username": "JIT Escalation",
        "icon_emoji": ":unlock:",
        "link_names": True
    }
    try:
        response = requests.post(
            url=SLACK_URL,
            headers={"Content-Type": "application/json"},
            data=json.dumps(slack_message)
        )
    except requests.exceptions.RequestException:
        print('HTTP Request failed')

For production deployments, ensure placeholders like "YourSlackURL" and "EndKey" are replaced with environment variables to maintain the confidentiality of sensitive data.

Resources

Google Cloud's Just-In-Time (JIT) access, paired with instant alerts, provides a robust security solution. This system immediately tells organizations about access requests, so they can respond quickly to keep their cloud infrastructure safe and within the rules. As your cloud environment changes, taking quick, forward-thinking steps will be vital in maintaining strong security.

Below you can find Google's documentation on implementing JIT and the GitHub repository mentioned in this article.

Manage just-in-time privileged access to projects | Cloud Architecture Center | Google Cloud
GitHub - GoogleCloudPlatform/jit-access: Just-In-Time Access is a self-service web application that lets you manage just-in-time privileged access to Google Cloud projects. JIT Access runs on App Engine and Cloud Run.

Just-In-Time Access in Google Cloud: Enhancing Security with Real-time Alerts was originally published in InfoSec Write-ups on Medium, where people are continuing the conversation by highlighting and responding to this story.
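One way to follow that advice in a v2 Cloud Function is to read the values from the environment at cold start and fail fast if they are missing, rather than shipping placeholder strings (the helper name here is illustrative):

```python
import os

def require_env(name: str) -> str:
    """Fetch a required setting from the environment, failing fast at
    cold start instead of sending requests with placeholder values."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# In the Cloud Function above, the hard-coded placeholders would become:
#   SLACK_URL = require_env("SLACK_URL")
#   SLACK_CHANNEL = require_env("SLACK_CHANNEL")
```

Cloud Functions let you set these variables at deploy time, so the webhook URL never needs to live in source control.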

Source: InfoSec Write-ups
A New Perspective on Resource-Level Cloud Forensics
AWS classifies cloud incidents across three domains: Service, Infrastructure, and Application. There has been much previous discussion across the Service and Application domains; see, for example, the excellent SANS DFIR 2022 Keynote. This talk will focus on the unique challenges and opportunities of responding to incidents in the Infrastructure domain. Cloud service providers, such as AWS, GCP and Azure, often introduce artifacts of forensic value when developing features for the automation and monitoring of resources. Typically, these artifacts are undocumented and exist purely for the provider's own troubleshooting, but they also provide valuable insight to an investigator analyzing malicious activity on a system. Frequently, this insight surpasses that of "provider-supported" forensic data sources. Most of the discourse around performing forensics in the cloud focuses on provider-level logging. While this is undoubtedly useful, practitioners understand that resource-level forensic analysis is crucial when responding to incidents affecting cloud infrastructure, yet much of this knowledge remains opaque and undocumented. In this presentation, Chris Doman, CTO of Cado Security, will present novel research on undocumented forensic artifacts from cloud service provider-specific operating systems and tools. He will provide the audience with an overview of forensic techniques across cloud computing and serverless environments. He will also discuss native operating system artifacts, contrast them with their cloud equivalents, and consider their usefulness in the context of the cloud. Attendees can expect to gain a unique perspective on resource-level cloud forensics and should leave the talk with a host of new data sources and knowledge for performing forensic analysis of cloud resources.

SANS DFIR Summit 2023
Speaker: Chris Doman, Co-founder, Cado Security
View upcoming Summits: http://www.sans.org/u/DuS

Source: SANS Digital Forensics and Incident Response
ZeusCloud - Open Source Cloud Security
ZeusCloud is an open source cloud security platform. Discover, prioritize, and remediate your risks in the cloud.

- Build an asset inventory of your AWS accounts.
- Discover attack paths based on public exposure, IAM, vulnerabilities, and more.
- Prioritize findings with graphical context.
- Remediate findings with step-by-step instructions.
- Customize security and compliance controls to fit your needs.
- Meet compliance standards PCI DSS, CIS, SOC 2, and more!

Quick Start

Clone repo: git clone --recurse-submodules git@github.com:Zeus-Labs/ZeusCloud.git
Run: cd ZeusCloud && make quick-deploy
Visit http://localhost:80

Check out our Get Started guide for more details. A cloud-hosted version is available on special request - email founders@zeuscloud.io to get access!

Sandbox

Play around with our sandbox environment to see how ZeusCloud identifies, prioritizes, and remediates risks in the cloud!

Features

Discover attack paths - Discover toxic risk combinations an attacker can use to penetrate your environment.
Graphical context - Understand the context behind security findings with graphical visualizations.
Access Explorer - Visualize who has access to what with an IAM visualization engine.
Identify misconfigurations - Discover the highest risk-of-exploit misconfigurations in your environments.
Configurability - Configure which security rules are active, which alerts should be muted, and more.
Security as code - Modify rules or write your own with our extensible security-as-code approach.
Remediation - Follow step-by-step guides to remediate security findings.
Compliance - Ensure your cloud posture is compliant with PCI DSS, CIS benchmarks, and more!

Why ZeusCloud?

Cloud usage continues to grow. Companies are shifting more of their workloads from on-prem to the cloud, both adding new workloads and expanding existing ones. Cloud providers keep increasing their offerings and their complexity. Companies are having trouble keeping track of their security risks as their cloud environment scales and grows more complex. Several high-profile attacks have occurred in recent times: Capital One had an S3 bucket breached, Amazon had an unprotected Prime Video server breached, Microsoft had an Azure DevOps server breached, Puma was the victim of ransomware, etc.

We had to take action:
- We noticed traditional cloud security tools are opaque, confusing, time-consuming to set up, and expensive as you scale your cloud environment.
- Cybersecurity vendors don't provide much actionable information to security, engineering, and devops teams, inundating them with non-contextual alerts.
- ZeusCloud is easy to set up, transparent, and configurable, so you can prioritize the most important risks.

Best of all, you can use ZeusCloud for free!

Future Roadmap
- Integrations with vulnerability scanners
- Integrations with secret scanners
- Shift-left: Remediate risks earlier in the SDLC with context from your deployments
- Support for Azure and GCP environments

Contributing

We love contributions of all sizes. What would be most helpful first:
- Please give us feedback in our Slack.
- Open a PR (see our instructions below on developing ZeusCloud locally).
- Submit a feature request or bug report through Github Issues.

Development

Run containers in development mode:

cd frontend && yarn && cd -
docker-compose down && docker-compose -f docker-compose.dev.yaml --env-file .env.dev up --build

Reset neo4j and/or postgres data with the following:

rm -rf .compose/neo4j
rm -rf .compose/postgres

To develop on the frontend, make the code changes and save. To develop on the backend, run:

docker-compose -f docker-compose.dev.yaml --env-file .env.dev up --no-deps --build backend

To access the UI, go to: http://localhost:80.

Security

Please do not run ZeusCloud exposed to the public internet. Use the latest versions of ZeusCloud to get all security-related patches. Report any security vulnerabilities to founders@zeuscloud.io.

Open-source vs. cloud-hosted

This repo is freely available under the Apache 2.0 license. We're working on a cloud-hosted solution which handles deployment and infra management. Contact us at founders@zeuscloud.io for more information!

Special thanks to the amazing Cartography project, which ZeusCloud uses for its asset inventory. Credit to PostHog and Airbyte for inspiration around public-facing materials - like this README!

Download ZeusCloud
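The attack-path discovery described in the Features section is, at its core, a search over a graph whose edges encode "an attacker here can reach there" (public exposure, an IAM role assumption, an exploitable vulnerability). A toy sketch with an invented three-hop graph — ZeusCloud itself builds its real graph with Cartography on neo4j, so this is an illustration of the idea, not its actual code:

```python
from collections import deque

# Toy asset graph: an edge X -> Y means "an attacker at X can reach Y"
# (e.g. via public exposure, an instance profile, or a bucket policy).
EDGES = {
    "internet": ["ec2-web"],            # ec2-web is publicly exposed
    "ec2-web": ["role-app"],            # instance profile grants role-app
    "role-app": ["s3-customer-data"],   # role-app can read the bucket
}

def attack_paths(start: str, target: str):
    """Breadth-first enumeration of simple attack paths from start to target."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            paths.append(path)
            continue
        for nxt in EDGES.get(path[-1], []):
            if nxt not in path:          # avoid revisiting nodes (cycles)
                queue.append(path + [nxt])
    return paths

print(attack_paths("internet", "s3-customer-data"))
# [['internet', 'ec2-web', 'role-app', 's3-customer-data']]
```

Prioritization then falls out naturally: a finding that sits on a path from "internet" to sensitive data is the toxic combination worth fixing first.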
Source: KitPloit