
Unlocking Secure Access to Enterprise Generative AI with Splashtop

By Yanlin Wang

The rise in popularity of ChatGPT and other forms of generative AI marks the beginning of a transformative period for businesses and society. AI and machine learning (ML) have been enhancing business value for some time now, but the sudden emergence of generative AI, including large language models (LLMs), is creating waves of change across all industries.

With the proliferation of SaaS offerings in the generative AI space — from OpenAI’s ChatGPT to Anthropic’s Claude, Google’s Bard, and Microsoft Bing AI — companies of all sizes are rushing to establish guidelines and policies governing the utilization of these emerging services. At the same time, they are also grappling with the implications of this powerful yet nascent technology — how to mitigate the risk of exposing private information, protect intellectual property, deal with potential misinformation or harmful content, and avoid unintended copyright infringement.

In this rapidly evolving landscape, one question has emerged as particularly pressing for organizations: how to use private data with generative AI safely, securely, and effectively. It’s precisely in this domain that our recently introduced Splashtop Secure Workspace platform can help.

Private AI Models for Enterprises

Certainly, there are commercial generative AI SaaS offerings that pledge not to use submitted data to improve their public models. However, not all companies are comfortable sending private data into a cloud service over which they have little control. Some organizations may be subject to sovereignty and regulatory compliance that preclude the use of services hosted in another region or country.

Other companies that would benefit from a private AI model are those with a business need to train their own LLMs from scratch, or a need to protect data augmentations and optimized LLMs that have been pre-trained for specific tasks, such as customer support or financial advisory.

Companies that build their own private generative AI and MLOps infrastructure can bring the power of these tools under their own IT control – whether on-prem or in a private cloud – allowing them to align with business needs as well as compliance and sovereignty requirements. This setup ensures that any sensitive or confidential data used to train, fine-tune, or augment queries to LLMs is not exposed to external parties.

Securing Access to Private Applications, Including Large Language Models

A private setup needs protection, too. Everything in your machine learning operational pipeline, from the language models to the supporting databases and private data repositories, requires secure provisioning, access, and management.

That's where Splashtop Secure Workspace comes in. We offer a simple yet secure way of controlling access to any private enterprise application, including private LLMs. Whether enterprise applications provide web-based interfaces or require custom desktop or mobile applications, our platform enables secure access from any client device, anywhere, and all without exposing network service ports to the outside world or requiring complicated networking setup with firewalls, bastion hosts, or VPNs.

Securing Access Across All Levels

Splashtop Secure Workspace supports a rich set of options to control access to any private enterprise application, including LLMs. Key capabilities include:

  • Single Sign-On (SSO) integration — Syncs with popular identity providers (including Google, Microsoft, Okta)

  • Multi-Factor Authentication (MFA) — Enables strong authentication controls for both front-end and back-end users across all SSO-enabled enterprise applications, including the chat interface for a private enterprise LLM.

  • Conditional Access controls — Limits access to applications by checking key compliance and security criteria, including geolocation and the use of corporate-issued laptops. For instance, organizations adhering to data sovereignty rules may want to block access to the private LLM when employees are traveling abroad.

  • Privileged Access with delegated controls — Splashtop can control access by securely delegating privileged accounts or service accounts designated for managing critical subsystems and data in an enterprise application. For an LLM, this allows you to control and track access to the model itself, a vector or graph database, unstructured data repositories, or the ML pipeline, without unnecessary exposure to sensitive credentials.

  • Secure Third-Party Access — Our platform provides secure access sharing with third parties who may need temporary access to enterprise applications. This could include a private LLM solution provider needing secure access for on-site troubleshooting. Secure Workspace enables convenient access while fully recording sessions for auditing and compliance purposes.

  • Zero Trust Network Access (ZTNA) — In contrast to traditional VPNs that grant complete access to an entire network subnet, Splashtop Secure Workspace’s ZTNA approach grants pinpoint access to approved resources, ensuring a minimal attack surface. This “default-deny” approach provides reassurance when enterprise LLMs handle highly sensitive data.

  • API-Driven Automation — Companies committed to automation and DevOps workflows will appreciate the ability to tightly integrate and automate our Secure Workspace platform. Within a generative AI context, Splashtop Secure Workspace fits seamlessly into any MLOps pipeline, facilitating automated provisioning of access to key resources as well as automated configuration of the Secure Workspace platform itself, maximizing productivity and reducing human errors.
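To make the automation point concrete, here is a minimal sketch of how a pipeline script might assemble an application definition before submitting it to an admin API. The payload fields mirror the application settings used later in this guide, but the JSON schema and the idea of POSTing it are illustrative assumptions, not documented Secure Workspace API details.

```shell
# Hypothetical sketch: build the JSON an automation script might submit
# when provisioning a private application. Field names are assumptions
# chosen to match the settings used later in this guide.
app_payload() {
  local name="$1" host="$2" port="$3"
  printf '{"name":"%s","host":"%s","port":%s,"protocol":"HTTP"}\n' \
    "$name" "$host" "$port"
}

# Generate the payload for the private LLM chatbot application.
app_payload "private-llm" "localhost" 6006
```

Once the real endpoint and authentication scheme are known, a CI/CD or MLOps job could feed this payload to the admin API with a tool like curl, keeping application provisioning fully scripted.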

Now, we’ll demonstrate how to enable secure access to enterprise generative AI with our Secure Workspace. By following along, you'll end up with your own private LLM-based chatbot: your own personal ChatGPT.

A Practical Guide to Creating a Private LLM Chatbot

Building your own LLM chatbot might sound like a complex task. In reality, it's simpler than you think. Let's break it down using an open-source MLOps tool called dstack.

In this example, a single dstack run command allows you to provision and bring up LLM-As-Chatbot in your cloud environment.

The example will also simultaneously provision the Splashtop Secure Workspace connector to connect your private LLM chatbot to your Secure Workspace. You can then set up access control within your secure workspace just like any other application, utilizing strong authentication, SSO, MFA, and a rich set of conditional access policies.

Here is a step-by-step guide to setting up secure access to your private LLM:

  1. Clone the repository
    git clone https://github.com/yanlinw/LLM-As-Chatbot.git
    cd LLM-As-Chatbot

  2. Install and set up dstack
    pip install "dstack[aws,gcp,azure,lambda]" -U
    dstack start
    Once the dstack server starts, log in and create a project using your cloud credentials (AWS, GCP, or Azure).

  3. Create a dstack profile
    Create a .dstack/profiles.yml file under the root of the LLM-As-Chatbot folder. This file should point to the created project and describe the resources.
    Example:
    profiles:
      - name: aws-llm
        project: aws-llm
        resources:
          memory: 16GB
          gpu:
            count: 1
        spot_policy: auto
        default: True
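If you prefer to script this step, the same profile can be written from the shell. The heredoc below reproduces the profile content shown above; adjust the memory and GPU count to your workload.

```shell
# Write the dstack profile file from the shell; the contents match the
# profile example in this guide.
mkdir -p .dstack
cat > .dstack/profiles.yml <<'EOF'
profiles:
  - name: aws-llm
    project: aws-llm
    resources:
      memory: 16GB
      gpu:
        count: 1
    spot_policy: auto
    default: True
EOF

# Confirm the file landed where dstack expects it.
ls .dstack/profiles.yml
```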

  4. Initialization
    Switch to the dstack server UI, then copy and paste the dstack config command into your terminal. This allows the dstack server to remotely provision the cloud resources. Follow this with the dstack init command.
    dstack config --url http://127.0.0.1:3000 --project aws-llm --token $MY_TOKEN
    dstack init

How to set up Secure Workspace for your private LLM

Now, let’s take the steps necessary to secure access to your own private large language model (LLM) with Splashtop Secure Workspace. Our connectors provide secure connectivity to your private applications, giving you centralized control over access. Let’s get started.

Step 1: Create a connector and copy the connector token

  1. Log in to your Splashtop Secure Workspace admin account.

  2. Go to the Deployment menu, select Connector, and click Add Connector.

  3. Choose Headless / CLI, fill in the connector name, and click Next. Then choose Linux and click Done.

  4. After creating the connector, click the connector name from the connectors list to view the details. Copy the Token for use below.
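Since the token is needed again when you provision the app in your cloud, storing it in an environment variable avoids pasting it repeatedly. The variable name below is just a convention for this guide, not something Secure Workspace requires.

```shell
# Keep the connector token in an environment variable for later commands.
# Replace the placeholder with the token copied from the admin console.
export SSW_CONNECTOR_TOKEN="paste-connector-token-here"

# Fail fast if the token is missing before kicking off provisioning.
if [ -z "$SSW_CONNECTOR_TOKEN" ]; then
  echo "connector token is empty" >&2
  exit 1
fi
echo "connector token stored"
```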

Step 2: Create the LLM application

  1. Add the private LLM chatbot service as a private application to make it available to your employees.

  2. Navigate to Applications / Applications and click the Add Application / Add Private Application button.

  3. Fill in the form with the application name, 'localhost' as the Host, and '6006' as the Port.

  4. Choose HTTP as the Protocol, choose the connector name created earlier and assign the proper group to this application. Click Save.

Secure Workspace will automatically generate a fully qualified domain name (FQDN) for the private LLM application.

Step 3: Run the app in your cloud

  1. In the LLM-As-Chatbot folder, use the dstack run command to provision the private LLM and Secure Workspace in your cloud (replace $token with the connector token from step 1):
    dstack run . -f ssw-private-llm.yml $token

  2. This command will provision and run LLM-As-Chatbot in your cloud and start the Secure Workspace connector instance.

  3. After everything is up and running, use the FQDN generated in Step 2 to access the private LLM. In the meantime, you can set up entitlement and conditional access policies for this application.
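As a quick sanity check once provisioning finishes, you can assemble the application's URL from its FQDN. The domain below is a placeholder; substitute whatever Secure Workspace generated for your application in Step 2.

```shell
# Placeholder FQDN; replace it with the domain Secure Workspace
# generated for your private LLM application.
FQDN="private-llm.example-workspace.net"
URL="https://${FQDN}/"
echo "Chatbot UI: ${URL}"
```

From a device enrolled in your workspace, you could then confirm reachability with something like `curl -I "$URL"` before handing the application to users.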

How-To Video Resource

For a step-by-step follow-along guide, watch the video:

Splashtop Secure Workspace - Generative AI & Private LLM

Conclusion

As enterprises adopt private generative AI and LLMs, safeguarding access to these innovative systems becomes paramount.

Splashtop Secure Workspace can secure remote access to business-critical systems while seamlessly integrating with your existing infrastructure.

By following the steps outlined in this article, you can set up and manage your private LLM secure workspace, protecting your company's proprietary data while maintaining operational efficiency.

The combination of Generative AI and secure access systems will continue to shape the future of business operations, and being proactive in the adoption and secure management of these systems can position your enterprise at the forefront of this transformation.

For early access to Splashtop Secure Workspace, join our waitlist.

Yanlin Wang, VP of Advanced Technology
Yanlin Wang
As VP of Advanced Technology at Splashtop, Yanlin Wang is the driving force behind the Splashtop Secure Workspace. With over 20 years of leadership experience with companies like Fortinet, Centrify, and ArcSight/HP Software – Yanlin has remained at the forefront of the security technology space, with proven experience building award-winning software and top-tier teams. His strong business acumen is evidenced by his multiple patents and contributions to global M&A transactions. Away from the corporate world, his interests include running, table tennis, and calligraphy.

Copyright © 2024 Splashtop Inc. All rights reserved.