https://youtu.be/vlb7RTJ7jN4?feature=shared

1. Introduction: Product Design Enters the Era of Automation

Most of the products we encounter in everyday life begin with product design. From smartphones and refrigerators to cars and medical devices, every product goes through a process of conceptualization and development. When we think of this process, we typically imagine an expert manually drafting designs in CAD software. Indeed, traditional product design has long been a labor-intensive, time-consuming task reliant on human expertise.

However, recent advancements in AI and computer vision are fundamentally changing this paradigm. We’ve now entered an era where basic product designs can be generated automatically from just a piece of text or an image. This technology dramatically accelerates product development, reduces repetitive tasks, and significantly cuts down the time and cost of turning ideas into real-world outcomes.

In this page, we’ll explore how the emerging technology of Text/Image to CAD works—and how it can transform the way companies approach product development.

2. How Does Text/Image-to-3D Differ?

A concept often discussed alongside Text/Image to CAD is Text/Image-to-3D technology. At first glance the two may seem similar, since both use AI to generate 3D models from text or image input, but they differ significantly in purpose, output, and practical application.

Text/Image-to-3D technology primarily focuses on visualization. When a user inputs a sentence like “a cactus in a flowerpot” or “a cat in a spacesuit,” the AI imagines and generates a 3D representation of that scene. This process is highly useful for entertainment, gaming, metaverse content creation, or rapid concept visualization. The resulting 3D models are typically mesh-based and unconstrained, allowing for creative and imaginative outputs. However, they lack the precise dimensions, tolerances, and geometric constraints required by CAD systems, making them difficult to apply to product design in the real world of manufacturing.

On the other hand, Text/Image to CAD technology is built with actual product design and manufacturing in mind. Rather than generating merely what “looks right,” it focuses on producing structurally accurate data for industrial use. For example, a prompt like “create a 200mm-long, 2mm-thick aluminum plate with two screw holes” can be interpreted precisely and converted into a CAD file ready for production.
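
To make the example concrete, here is a minimal sketch of how such a prompt could be reduced to parametric attributes. The `PlateSpec` dataclass and `parse_plate_prompt` function are hypothetical illustrations of the idea, not TRINIX’s actual implementation, which relies on AI models rather than simple pattern matching:

```python
import re
from dataclasses import dataclass

@dataclass
class PlateSpec:
    length_mm: float
    thickness_mm: float
    material: str
    hole_count: int

# Small lookup for spelled-out counts in prompts.
NUM_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4}

def parse_plate_prompt(prompt: str) -> PlateSpec:
    """Extract parametric plate attributes from a natural-language prompt."""
    length = re.search(r"(\d+(?:\.\d+)?)\s*mm-long", prompt)
    thickness = re.search(r"(\d+(?:\.\d+)?)\s*mm-thick", prompt)
    material = re.search(r"\b(aluminum|steel|plastic|titanium)\b", prompt, re.I)
    holes = re.search(r"\b(one|two|three|four|\d+)\s+screw holes?", prompt, re.I)
    count = holes.group(1).lower() if holes else "0"
    return PlateSpec(
        length_mm=float(length.group(1)) if length else 0.0,
        thickness_mm=float(thickness.group(1)) if thickness else 0.0,
        material=material.group(1).lower() if material else "unspecified",
        hole_count=NUM_WORDS.get(count, int(count) if count.isdigit() else 0),
    )

spec = parse_plate_prompt(
    "create a 200mm-long, 2mm-thick aluminum plate with two screw holes"
)
print(spec)
# PlateSpec(length_mm=200.0, thickness_mm=2.0, material='aluminum', hole_count=2)
```

The point of the sketch is the output format: unlike a free-form mesh, every attribute is an explicit, dimensioned parameter that a CAD kernel can consume directly.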

In essence, while Text/Image-to-3D is geared toward imaginative rendering, Text/Image to CAD aims to generate real, manufacturable designs.

3. What Sets TRINIX’s Text/Image to CAD Apart?

Text/Image to CAD refers to the technology that automatically generates 3D CAD models from text or image input. However, most conventional approaches still require precise technical instructions—such as dimensions, thickness, or component positions—to produce meaningful output. In other words, while product design is becoming more accessible, it still demands a certain level of engineering knowledge.

TRINIX, developed by NdotLight, breaks this barrier. With TRINIX, a simple prompt like “create a smartphone similar to a Galaxy or iPhone” is enough. The AI interprets this description, generates the corresponding 3D geometry, and converts it into a CAD-ready format. When using an image, such as a sketch or product photo, TRINIX can also generate a structurally similar CAD model automatically.

This capability is made possible through the convergence of large language models (LLMs), generative AI, and NdotLight’s proprietary 3D CAD engine. At its core are deep learning–based shape prediction and text-to-shape mapping technologies. The model is trained on a large-scale CAD dataset and infers user intent to produce manufacturing-ready output.
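
TRINIX’s internals are not public, but the general text-to-shape mapping idea can be sketched as translating a structured design intent into an ordered sequence of CAD operations. The `intent_to_operations` function and the operation names below are hypothetical, chosen only to illustrate the pattern:

```python
def intent_to_operations(intent: dict) -> list[dict]:
    """Translate a structured design intent into an ordered list of CAD operations."""
    ops = [
        {"op": "sketch_rectangle",
         "width_mm": intent["length_mm"], "height_mm": intent["width_mm"]},
        {"op": "extrude", "depth_mm": intent["thickness_mm"]},
    ]
    # Each requested hole becomes an explicit, dimensioned drilling operation.
    for x, y in intent.get("hole_positions_mm", []):
        ops.append({"op": "drill_hole", "x_mm": x, "y_mm": y,
                    "diameter_mm": intent.get("hole_diameter_mm", 5.0)})
    return ops

ops = intent_to_operations({
    "length_mm": 200.0, "width_mm": 50.0, "thickness_mm": 2.0,
    "hole_positions_mm": [(-80.0, 0.0), (80.0, 0.0)],
})
print(len(ops))  # 4
```

Representing the design as an operation history rather than a mesh is what keeps the result editable and manufacturable: every feature remains a parameter a designer can still change downstream.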

In the past, a product manager would describe an idea to a designer, who would then collaborate with an engineer to complete the CAD work. TRINIX collapses all of that into a single step. This not only speeds up the realization of product ideas but also reduces prototyping costs.

4. How TRINIX’s Text/Image to CAD Solution Works

NdotLight’s Text/Image to CAD–based product design automation platform operates through four core stages:

1. Input Pre-processing

The user’s prompt—whether a sentence or an image—is first preprocessed and converted into a format interpretable by AI. This ensures the input is clean, structured, and contextually optimized for downstream inference.
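
As an illustration of the kind of normalization this stage might perform (the actual TRINIX pipeline is not public), a minimal sketch that collapses stray whitespace and rewrites every length to millimeters so downstream stages see consistent units:

```python
import re

# Assumed unit table; a real system would cover far more notations.
UNIT_TO_MM = {"mm": 1.0, "cm": 10.0, "m": 1000.0, "in": 25.4, "inch": 25.4}

def normalize_prompt(prompt: str) -> str:
    """Collapse whitespace and rewrite every length to millimeters."""
    def to_mm(match: re.Match) -> str:
        value = float(match.group(1)) * UNIT_TO_MM[match.group(2).lower()]
        return f"{value:g}mm"
    cleaned = " ".join(prompt.split())
    return re.sub(r"(\d+(?:\.\d+)?)\s*(mm|cm|m|in|inch)\b", to_mm, cleaned, flags=re.I)

print(normalize_prompt("a  20 cm long, 0.2cm thick plate"))
# a 200mm long, 2mm thick plate
```

Image inputs would need an analogous step (resizing, background removal, and so on) before being handed to the shape-prediction model.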

2. Intent Interpretation