The Invisible Risk: AI-Assisted Coding
What Every Leader Needs to Know Before Someone on Your Team Writes Their First Script
With the push to drive efficiency and automate heretofore mundane or complex tasks, AI tools have made it possible for anyone to write functional code in minutes, no technical expertise required. These advances offer enormous potential, but they also introduce one of the most underestimated security risks in an organization today.
Consider this scenario: a finance analyst used an AI tool to write a script that pulled client data from three internal systems. It worked perfectly, and unknowingly sent unencrypted data to an external server… No one noticed for weeks. This highlights that non-technical employees are not the problem; the problem is giving them powerful tools without the scaffolding to use them safely.
Writing Code You Don't Understand
When a business user asks an AI coding assistant to "write a script to pull data from an Application Programming Interface (API)," the AI assistant generates working code that exchanges data between systems in minutes. The code works. It runs. However, the user has no visibility into what the code is actually doing, what data it touches, where it sends information, what level of data encryption is applied, or what vulnerabilities it could possibly introduce.
This is not incompetence or ignorance, simply a fundamental knowledge and trust gap. With each instance where your employee reviews the AI results and finds them accurate, they will feel an increasing level of comfort, hence stop being curious or skeptical. The resulting code may appear well-documented, but verifying its safety requires developer-level expertise in the specific language, a review step that rarely happens when the person generating the code works in finance, operations, or accounting.
Open-Source Libraries As An Attack Vector
AI coding tools routinely pull from open-source libraries to save time, leveraging code written to perform a specific task. However, many repositories are compromised, abandoned, or deliberately malicious — and the AI has no way to know.
A high-profile real-world example is LiteLLM, a popular Python library for connecting to APIs with AI. Security researchers identified that versions of this library were compromised to introduce unexpected network calls not part of the original codebase. A user installing it based on an AI recommendation would have no reason to suspect anything was wrong. The library appeared legitimate and widely used, yet it contained malicious code buried in a dependency.
To that end, between 2023 and 2025, over seven hundred malicious packages were removed from the Python Package Index (PyPI) alone. AI tools trained before removal may still recommend and use them.
GitHub repositories carry the same risk. Attackers publish packages with names nearly identical to trusted ones, for example, 'req-uests' instead of 'requests', designed to steal credentials or install malware. These compromised fake versions have been found in AI-recommended code.
What’s At Stake
When AI-generated code mishandles sensitive data, the consequences extend beyond a technical incident. Ramifications include regulatory penalties, breach notification obligations, litigation exposure, and reputational damage that could impact pending transactions or market value.
Your Organization Needs Guardrails
With the rapid pace of AI deployment, every Information Technology team should support innovation while defining appropriate guardrails.
- Pre-Approved Software Library List: In collaboration, the Cyber and Information Technology teams should maintain a vetted, regularly updated list of approved open-source libraries and packages. Any AI-generated code that references a dependency outside this allowlist should be automatically flagged or blocked.
- Safe Testing Space: Run, test, and assess AI-generated code in isolated sandbox environments before deploying it anywhere near production systems or sensitive data. This limits the risk and exposure if the code behaves unexpectedly or maliciously and allows teams to observe runtime behavior in a controlled setting. Most major cloud providers offer sandbox environments that can be quickly provisioned at minimal cost.
- Change Management Review Process: Before AI-generated code moves beyond the testing phase, it goes through a formal review by the technical team. This review should include a check against the Pre-Approved Software Library List." This is the same change management discipline that applies when updating a financial system or modifying a business workflow, nothing goes live without documented review and approval.
- Access Controls and Least-Privilege Principles: Restrict what AI coding tools can access. These tools often request broad permissions by default during setup, which is why access must be proactively managed. They should not have credentials to production databases, secrets managers, or internal APIs. Apply the principle of least privilege so that even if AI-generated code contains something harmful, it lacks the permissions to do real damage. Audit logs should track what the AI tools generate and what developers accept.
- Automatic Safety Scanning: Every piece of code the AI generates is automatically scanned with a Static Application Security Testing (SAST) tool for known risks before you can do anything with it. It is like a spell-checker, but for security, it runs in the background and catches problems you would not be expected to spot yourself.
- Awareness Training: Provide non-technical employees with focused, accessible training that covers three things: what they are allowed to use AI coding tools for, what risks exist (in plain language – not security jargon), and what to do when something seems off. They should understand that AI can confidently suggest dangerous code, that open-source libraries are not inherently safe, and that their role is to use the tools within defined boundaries.
The guardrails outlined above are not designed to be barriers to innovation, they are what make sustainable innovation possible. Non-technical employees are not the problem; the problem is giving them powerful tools without the scaffolding to use them safely.
This Week’s Sponsor
Newcomb & Boyd is a market leader delivering intuitive, high-performance engineering solutions that meet practical, environmental, and financial goals. Building on a 100-year legacy of innovation, we design spaces that inspire and perform. We are committed to decarbonization, sustainability, resiliency, and intelligent buildings. Learn more at www.newcomb-boyd.com.
Read Next
4/8/2026
Eating the Elephant: A Practical Path to Intelligent Buildings There is a joke I have used to guide my team for over thirty years: "How do you eat an elephant? One bite at a time." It's silly.
4/1/2026
Connected Tech Is Reshaping Commercial Real Estate Today’s CRE owners and operators face relentless pressure to improve efficiency, gain better insight and deliver stronger tenant and customer experiences.
3/18/2026
How Cove’s Unified AI-Powered Building Platform is Reshaping Property Operations Commercial real estate teams are being asked to deliver more with less. Tenants expect hospitality-level service. Ownership demands tighter reporting and risk control
The New Operating Model for Building Portfolios: Secure Visibility, Not Just More Data Commercial buildings are becoming software-defined assets. Systems that were once isolated – HVAC, lighting, metering, indoor air quality, and specialty plant – now generate continuous streams of operational data and can be accessed remotely in seconds.





