
What Compliance Automation Actually Looks Like Inside a Large Tech Company


Compliance teams at large technology companies operate under a level of regulatory scrutiny that most organizations never encounter. FTC settlements, GDPR transfer requirements, CCPA obligations, SOC audits: each one generates its own documentation burden, and the teams responsible for meeting those obligations often do so through manual processes that consume hundreds of hours per audit cycle. Sumit Sharma has spent the last several years building automation systems to replace those manual workflows. His work has covered automated control monitoring and evidence generation, third-party risk assessment tooling used by tens of thousands of employees, and security awareness training platforms serving over 650,000 users globally. He has also contributed to the professional knowledge base through ISACA Journal publications, peer review work for the Cloud Security Alliance and IEEE, and speaking engagements on third-party risk management and AI governance. We spoke with him about what compliance automation looks like when it’s actually running at scale, where AI fits into risk assessment today, and what most vendors get wrong about how these programs operate inside large companies.

You reduced manual evidence preparation from over 100 hours to minutes through automation. What did that project actually involve day to day, and what broke along the way?

This project involved three components: continuous control monitors, a failure escalation mechanism, and automated evidence generation. The generated evidence was available within a system, organized by control, for auditors to readily consume. This removed the manual overhead on business and technology teams, who previously had to pull evidence for specific samples during each audit cycle. With any system this new, issues can always arise. A couple of examples of what broke along the way: the monitoring logic was wrongly configured, or the wrong data source was selected, either of which led to inaccurate monitoring or evidence generation.
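To make the shape of such a pipeline a little more concrete, here is a minimal, hypothetical sketch in Python of a continuous control monitor with automated evidence generation and failure escalation. The example control, data source, and file naming scheme are illustrative assumptions, not the actual system Sharma describes.

```python
# Hypothetical sketch of a continuous control monitor with evidence
# generation and failure escalation. Names, data sources, and thresholds
# are illustrative assumptions only.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ControlResult:
    control_id: str
    passed: bool
    details: dict
    checked_at: str


def check_mfa_enabled(accounts: list) -> ControlResult:
    """Example control: every privileged account must have MFA enabled."""
    violations = [a["id"] for a in accounts if a.get("privileged") and not a.get("mfa")]
    return ControlResult(
        control_id="IAM-01-MFA",
        passed=not violations,
        details={"violations": violations, "population": len(accounts)},
        checked_at=datetime.now(timezone.utc).isoformat(),
    )


def generate_evidence(result: ControlResult) -> str:
    """Produce machine-readable evidence that auditors can consume per control."""
    path = f"evidence/{result.control_id}-{result.checked_at[:10]}.json"
    # A real system would write this record to an evidence store; here we
    # simply return the serialized payload.
    return json.dumps({"path": path, **asdict(result)}, indent=2)


def escalate_if_failed(result: ControlResult) -> None:
    """Failure escalation hook: route failed controls to the owning team."""
    if not result.passed:
        print(f"[ESCALATION] {result.control_id} failed: {result.details}")


if __name__ == "__main__":
    accounts = [
        {"id": "alice", "privileged": True, "mfa": True},
        {"id": "bob", "privileged": True, "mfa": False},
    ]
    result = check_mfa_enabled(accounts)
    print(generate_evidence(result))
    escalate_if_failed(result)
```

The point of the sketch is the separation of concerns: the monitor evaluates a control against a data source, the evidence generator records the result per control for auditors, and the escalation hook fires only on failure.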

Compliance teams at large organizations tend to resist new tooling. When you rolled out a portal overhaul to 80,000 employees, how did you get people to use it?

The portal overhaul was mostly about the user interface (UI). Before we made those changes, we had some internal metrics to start from; for example, the customer satisfaction score of the tool was lower than the expected baseline. We had also been seeing a lot of internal user tickets complaining about UI issues, slowness, and difficulty moving between screens within the tool, which signaled to us that a UI change was needed to better guide users. So in a way we heard user feedback and acted on it. Before the rollout of the new portal, we met with a few users and teams who used the tool more frequently than others. We also heard feedback from upstream and downstream system users, which gave us additional perspective, helped us point our requirements in the right direction, and let us focus improvements on the key UI components. Regarding adoption, we started communicating internally about what changes the new UI would bring and when, to avoid any surprises. We also invited some users to user acceptance testing to get their firsthand feedback. Upon rollout, we uploaded videos to the portal providing a walkthrough of all the new features for users.

Your ISACA writing covers AI and ML applications in third-party risk assessment. Which use cases are organizations actually deploying today versus talking about deploying?

I have seen organizations automating manual workflows, such as sending reminders, and also building risk assessment logic that rates a third party based on certain criteria. Additionally, I think certain organizations are also trying to integrate risk reviews with other reviews within the third-party life cycle to further create a seamless process for internal and external users.
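As a rough illustration of the criteria-based rating logic mentioned above, here is a hedged Python sketch; the criteria, weights, and tiers are invented for the example and do not represent any organization's actual methodology.

```python
# Hypothetical sketch of criteria-based third-party risk rating.
# Criteria, weights, and tier thresholds are illustrative assumptions.
CRITERIA_WEIGHTS = {
    "handles_personal_data": 3,
    "has_production_access": 3,
    "certified_soc2": -2,   # a current SOC 2 report lowers the score
    "prior_incidents": 2,
}


def rate_third_party(profile: dict) -> str:
    """Map a vendor profile to a risk tier based on weighted criteria."""
    score = sum(
        weight for criterion, weight in CRITERIA_WEIGHTS.items() if profile.get(criterion)
    )
    if score >= 5:
        return "high"
    if score >= 2:
        return "medium"
    return "low"


# A vendor that touches personal data and has production access scores 6 -> "high".
print(rate_third_party({"handles_personal_data": True, "has_production_access": True}))
```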

You’ve peer reviewed AI governance work for the Cloud Security Alliance and IEEE. What patterns do you notice in how practitioners are thinking about AI controls right now?

What I have been noticing is that practitioners are taking a risk-centric view of AI, which means they are looking at it as a new cybersecurity or compliance surface instead of a new innovation. They are trying to push for auditable controls that are mapped across the entire AI lifecycle, as opposed to high-level ethics statements. Also, there is a strong demand for cross-framework alignment (NIST, ISO, EU AI Act) to reduce fragmentation. Overall, AI governance must be adopted for safer and faster adoption in the AI development process, where governance is not just a check-box exercise but something that can enable trust, innovation and speed. This can also be a key differentiator for organizations that are either building AI or adopting it.

I would answer this question a little differently. Every third party requires sign-off from legal, procurement, privacy and security, and this is the right industry practice that regulators want to see. Your question seems to be more about how you run a project with so many stakeholders. When you work with multiple cross-functional stakeholders, the project’s problem statement and the impact it will have play a key role. A nice-to-have project will not fly with so many stakeholders. Hence, before starting or conceptualizing any project, one must clearly document the problem and the impact. Projects that are required to satisfy a regulatory requirement can be an easy sell, because no one wants the company to get fined or earn a bad reputation due to noncompliance. However, projects aimed at proactive risk mitigation can see a lot of pushback. Potential reasons could be the operational overhead on different functions or a lack of resources to manage it. To address these concerns, one should identify key metrics for this group of stakeholders so they can easily quantify the impact on their teams. This helps them better prepare for managing those operational constraints and also helps you align on the right timelines for a project go-live. This way you do not run a project that is about to fail, but a well-thought-out one where requirements are clearly captured, and though it takes time, you deliver a highly impactful project.

Agentic AI creates risks that existing IT control frameworks weren’t built for. What should organizations be documenting or measuring that most aren’t yet?

As of now we are already seeing or reading about instances where AI agents can access sensitive data and coordinate with other agents. I feel that from an AI perspective these risks should be mapped to fundamental principles of internal control and governance. Traditional frameworks such as COSO emphasize segregation of duties, monitoring and risk assessments that ensure reliable operations. However, they do not address the novel risks introduced by agentic AI, such as over-privileged access, inter-agent collusion, and prompt-based manipulation. There is a need for a control framework that integrates classical IT general controls (ITGCs) with emerging AI-specific considerations. Organizations must think about measuring the autonomy of such agents, including what they can access or invoke without human intervention. Model drift will require tracking, and organizations must log the steps, action chains and feedback loops of their agents. Also, as mentioned before, such frameworks must align with global regulatory requirements, which further gives organizations an opportunity to rationalize their control environment as opposed to creating multiple similar controls to satisfy AI requirements across different country-level or regional regulations.
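As one way to picture the logging Sharma describes, here is a small, hypothetical Python sketch that records a single step in an agent’s action chain together with an autonomy level; the field names and the autonomy scale are assumptions for illustration only.

```python
# Hypothetical sketch of agent action logging for auditability.
# Field names and the autonomy scale are illustrative assumptions.
import json
from datetime import datetime, timezone
from typing import Optional


def log_agent_action(agent_id: str, tool: str, autonomy_level: int,
                     human_approved: bool,
                     parent_action_id: Optional[str] = None) -> dict:
    """Record one step in an agent's action chain.

    autonomy_level: 0 = human executes, 1 = human approves each call,
                    2 = agent acts within a defined policy, 3 = fully autonomous.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "tool_invoked": tool,
        "autonomy_level": autonomy_level,
        "human_approved": human_approved,
        "parent_action_id": parent_action_id,  # links steps into a chain
    }
    # In practice this would be written to an append-only audit store.
    print(json.dumps(record))
    return record


log_agent_action("billing-agent-7", "crm.read_customer",
                 autonomy_level=2, human_approved=False)
```

Linking each record to its parent action is what turns isolated log lines into the reviewable action chains and feedback loops the answer calls for.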

You’ve worked in consulting, banking, and Big Tech. Which environment taught you the most about managing technology risk, and why?

I believe there is no one environment that has taught me the most about managing technology risk. Working in consulting gave me broad exposure across industries, clients, and local and global regulations. Banking taught me why technology risk is so important to manage in a financial institution, for the sheer fact that one systemic issue can have global ramifications across the bank and lead to financial loss, which directly impacts the bank’s revenue and, on top of that, its customers’ investments. Coming into tech with all this experience helped me understand the leadership mindset and how important risk management is to them and to the business. Unlike consulting or banking, big tech companies operate at massive scale, velocity and global regulatory exposure. What I learned is that although the basic risk management fundamentals still apply, they need to move beyond point-in-time assessments to more continuous risk monitoring. Also, the blast radius of a failure is immediate and user-impacting, requiring risk-based decision making. Risk is important, but it should not slow down the business. And since the risk dimension is more about managing user data and its impact, some regulatory requirements from other industries, such as banking, may not apply. Hence risk management here needs to be tightly coupled with product design, data architecture and automation rather than being a mere policy. What I learned, and am still learning, in tech is balancing innovation speed with regulatory obligations, which has definitely sharpened my ability to design projects that scale and are more preventive in nature than reactive.

What do vendors selling AI compliance tools get wrong about how these programs actually run inside large companies?

I feel vendors selling AI compliance tools underestimate how fragmented and complex large companies are. There is an underlying assumption that there is one centralized governance body, when in practice this responsibility is split across multiple compliance teams with overlapping authority. Tools are often built as dashboards that at times ignore the fact that actual compliance processes are executed through multiple systems and development pipelines. Also, if the tools do not help automate manual workflows such as evidence generation, it is difficult for them to scale. Another potential mistake is treating compliance as a static checklist as opposed to a continuous process that should incorporate regulatory updates and model changes. Tools are often positioned as ready for regulator reporting without thinking about usability for the engineers and program managers who will actually work in the tool day in, day out. Also, bigger organizations care less about flashy risk scores and are more concerned about traceability, auditability and accountability when a regulator asks, “who approved it and why?”
