Plastic surgeon Brandon Richland, MD, discusses how he uses AI in his practice and outlines key questions surgeons should ask before adopting AI tools in plastic surgery.


Artificial intelligence is rapidly reshaping the landscape of aesthetic medicine, with plastic surgeons increasingly turning to AI to enhance consultations, improve surgical planning, and monitor post-operative outcomes. But as the technology becomes more accessible, it also raises critical questions around safety, accuracy, liability, and patient trust.

To explore how AI is being integrated into private practice, and what surgeons need to consider before adopting these tools, Plastic Surgery Practice spoke with Brandon Richland, MD. A board-certified plastic surgeon with practices in California, Nevada, and soon Texas, Richland outlines how AI fits into his clinical workflow, how it supports (rather than replaces) medical judgment, and how he communicates its role to patients. He also provides a practical list of questions plastic surgeons should ask before investing in an AI platform, including how the tool was validated, whether it was tested on diverse patient populations, and who holds liability in the event of an error.

Brandon Richland, MD

Plastic Surgery Practice: You've been vocal about the growing role of AI in plastic surgery. What first drew you to explore these technologies, and in what ways are you currently integrating AI into your own practice?

Brandon Richland, MD: My journey into AI wasn’t sparked by a fascination with technology for its own sake, but by a fundamental challenge inherent in modern medicine: the overwhelming scale of data. AI has been instrumental in augmenting my knowledge base and skills, not replacing them. It handles the massive-scale data processing and pattern recognition, while I handle the critical thinking, human communication, and ethical decision-making. It is a powerful symbiosis that is fundamentally elevating the standard of care we can provide.  

PSP: From pre-op consultations to post-op monitoring, AI is touching many aspects of patient care. Which applications have had the most meaningful impact in your workflow so far, and how do you measure that value, whether it's efficiency, accuracy, or patient outcomes?

Richland: In my practice, we have experience with AI before-and-after simulators. These algorithms, trained on millions of medical images, take various patient factors as inputs and can help predict what a patient's desired post-operative outcome may look like. However, this type of software must be used cautiously: no surgeon or software can predict a patient's post-operative result with 100% accuracy, and the patient must be counseled accordingly.

PSP: As more plastic surgeons consider adopting AI tools, what key questions should they be asking, both from a clinical and an operational perspective, before making an investment?

Richland: The excitement around AI is palpable, but excitement alone doesn’t build a successful, safe, and profitable practice. Here are a few questions to consider before making a significant investment in AI.

  1. How was the AI model validated? This is my number one question. Ask the vendor for the data on which their algorithm was trained and, more importantly, validated. Was it trained on a diverse patient population that reflects my practice here in Southern California (eg, varied ethnicities, skin types, ages)? An AI trained on a homogenous population may have significant biases and inaccuracies when applied to a different demographic. You need to see the proof that it works for your patients.
  2. What are its known limitations and accuracy rates? No tool is perfect. A reputable vendor will be transparent about the AI’s limitations. In what scenarios is it less accurate? For an aesthetic simulator, does it struggle with certain skin tones or in patients with previous surgery? For a reconstructive planning tool, what is its published margin of error? You need to understand the tool’s boundaries to use it responsibly.
  3. How does this enhance, not replace, my clinical judgment? The AI should be a “cognitive partner,” not a “black box.” Does the tool show you why it’s making a recommendation? For example, if it suggests a specific implant size, does it show you the tissue strain analysis or volume calculations it used? I would never invest in a tool that simply gives an answer without providing the underlying data. My medical license is on the line, and I need to be the ultimate decision-maker.
  4. What are the medico-legal implications? You must ask: Who assumes liability if the AI contributes to a poor outcome? If a simulation creates an unrealistic expectation that leads to a dispute, or a planning tool misses a critical piece of anatomy, where does the responsibility lie? Have the vendor's terms of service and any contracts reviewed by your attorney, and understand your liability before you integrate the tool into your standard of care.

PSP: There's growing public skepticism around AI, particularly when it comes to health and aesthetics. How do you address patient concerns about AI in your consultations, and what messaging has helped build trust?

Richland: At Richland Aesthetics in Newport Beach, my patients are sophisticated, well-informed, and rightly cautious about new technology, especially when it concerns their face and body. The headlines about AI are impossible to ignore, so I've learned that addressing these concerns proactively, honestly, and clearly is fundamental to building trust. I explain to patients that I am still the clinician making the final judgment, but there are instances in which AI may be helpful or useful. It's always important to remember that AI is never a replacement for sound clinical judgment.

PSP: As you mentioned, AI-generated imaging can help patients visualize surgical outcomes, but it also raises questions about expectation management. How do you ensure patients understand what's realistic, and what's still just a model?

Richland: This is a very important question. Expectation management has always been fundamental to cosmetic plastic surgery. It is common for patients to bring in photos of their favorite celebrities and ask for a “nose like Blake Lively,” “fox eyes like Bella Hadid,” or a “jawline like Brad Pitt.” It's important to explain to patients what is possible and what may not be possible given the patient's existing anatomy. This is something that only comes with years of experience and is unlikely to be replaced by AI any time soon.

PSP: Looking ahead, what do you see as the most immediate opportunities, and potential pitfalls, for AI adoption in private practice plastic surgery over the next few years?

Richland: Before a major operation, I perform a thorough history and physical examination to ensure that the patient is medically optimized for surgery and to perform risk stratification. AI may help augment this process through predictive analytics: one could input patient data (age, comorbidities, lab values, etc) into a predictive model, which analyzes it against a massive dataset to predict the likelihood of specific post-operative complications, such as infection, delayed healing or wound complications, bleeding, DVT, or hospital readmission. If a patient is flagged as high-risk, we can then implement pre-emptive strategies to decrease those risks.
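To make that idea concrete, below is a minimal sketch of what such a risk-stratification model might look like in code, using a simple logistic regression in Python with scikit-learn. The features, training data, and risk threshold are hypothetical placeholders, not drawn from Richland's practice or from any validated clinical tool.

```python
# Illustrative sketch only: a toy model for flagging high-risk surgical
# candidates. Features, data, and threshold are hypothetical and are not
# from any validated clinical system.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Hypothetical pre-operative features: [age, BMI, smoker (0/1), hemoglobin A1c]
X_train = np.array([
    [34, 22.0, 0, 5.2],
    [61, 31.5, 1, 7.8],
    [45, 27.3, 0, 5.9],
    [70, 29.8, 1, 8.4],
    [28, 24.1, 0, 5.0],
    [55, 33.0, 1, 7.1],
])
# Outcome labels: 1 = post-operative complication occurred, 0 = uncomplicated
y_train = np.array([0, 1, 0, 1, 0, 1])

# Scale the features, then fit a logistic regression classifier
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Score a new pre-operative patient and flag them if predicted risk is high
new_patient = np.array([[58, 30.2, 1, 7.5]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Predicted complication risk: {risk:.0%}")
if risk > 0.5:  # threshold chosen purely for illustration
    print("Flag for pre-emptive optimization (e.g., glycemic control, smoking cessation)")
```

A production version of such a model would require large, demographically diverse datasets, external validation, and regulatory and medico-legal review, the same questions Richland raises above.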

One of the biggest pitfalls I see regarding AI is an overreliance on this new technology. These systems are incredibly powerful, but they are not infallible. An AI can process millions more data points than a human, but it has no common sense, no intuition, and no ability to understand the unique, unquantifiable context of a patient's life. This is compounded by the “black box” problem. Many advanced algorithms, particularly deep learning models, can't fully explain their reasoning. An AI might flag a skin lesion as suspicious, but it can't articulate why in the way a surgeon can. Overreliance means accepting that conclusion without the crucial step of independent, critical human verification. The moment we stop asking “Why?” and simply accept the AI's answer is the moment we abdicate our primary responsibility as physicians. PSP

Photo: ID 118737166 © Leowolfert | Dreamstime.com