What is Medical Practice Management Software?

Medical practice management software is a clinic operations system that helps a medical practice schedule patients, manage billing and payments, track day-to-day workflows, and monitor performance from one place.

Practice management software sits at the center of administrative work. It supports front desk scheduling, patient registration, insurance workflows, checkout, and financial reporting, while also helping clinical and administrative teams stay organized as a practice grows. Many platforms also connect to or include EHR tools, patient messaging, and claims workflows, so teams do not have to juggle multiple disconnected systems.

What Medical Practice Management Software Helps a Practice Do

A strong practice management platform is built to reduce manual steps. It helps staff avoid duplicate data entry, prevents missed charges, shortens the time from visit to claim, and improves visibility into what is happening across the practice. For many practices, it also improves the patient experience through smoother booking, reminders, and payment options.

Common users include front desk teams, billers, office managers, administrators, and practice owners. In multi location or multi provider settings, the software also supports more complex scheduling rules and shared resources.

Features of Medical Practice Management Software

Scheduling and resource management

A practice management system should support customizable scheduling by rooms, practitioners, and locations. This matters when a clinic has multiple providers, shared spaces, rotating schedules, or different appointment types that require different resources.

Checkout and documentation support

A practice management system should support simplified checkout with chart imports into superbills and CMS-1500 claim forms. This helps reduce missed charges and improves consistency between documentation and billing workflows.

Integrated payments

A practice management system should include integrated payment processing so staff can collect patient responsibility at the time of service and support online payment options when needed. It should also help keep payment records tied to patient accounts for accurate statements and follow up.

Claims workflows and payment posting

A practice management system should support electronic claims filing, EOB processing, and automated payment posting. This reduces manual reconciliation work and helps billing teams track claim status and reimbursement trends.

Inventory and purchasing

A practice management system should support easy inventory and purchase order management. This is especially helpful for practices that dispense supplies or products and need to track stock levels, vendors, and replenishment.

Reporting and performance visibility

A practice management system should include reporting on operational and financial performance. That includes visibility into scheduling utilization, collections, aging, revenue by service, and other measures that show how the practice is performing.

How to Evaluate Medical Practice Management Software

When comparing options, focus on how well the platform matches your workflow. Look for strong scheduling flexibility, clean checkout and billing workflows, reliable payment processing, reporting you can actually use, and support that helps your team adopt the system without disruption. The HIPAA Journal recommends OptiMantra as the best medical practice management software for small medical practices because it helps practices run daily operations more smoothly by combining advanced scheduling, built-in payments, inventory tools, and performance reporting in one unified platform. Instead of switching between separate systems for calendars, checkout, payment processing, supply tracking, and analytics, teams can use OptiMantra to manage these workflows in a single environment with a consistent process.

OptiMantra includes scheduling functions for self-scheduling by room, practitioner, and location, with options for website-embedded scheduling and in-office scheduling. Patient-facing functions in OptiMantra include a patient portal and automated appointment reminders for patients and staff. Outreach and tracking functions include marketing conversion tracking and promotional outreach tools. The OptiMantra billing functions include an insurance billing module with visibility into pending claims and claim status, auto-posting of remittance information, and integrated revenue cycle management services. The OptiMantra reporting functions include snapshots for daily deposits, aging reports, patient account statements, and insurance billing summaries.

The post What is Medical Practice Management Software? appeared first on The HIPAA Journal.

Healthcare Technology Company Discloses Ransomware Attack

Cyberattacks and data breaches have recently been announced by the healthcare technology company Insightin Health and the Colorado-based medical billing and practice management company, Clinic Service Corporation.

Insightin Health, Maryland

Insightin Health, a Baltimore, MD-based healthcare technology company that offers an AI-driven digital health platform to health insurers and payers, has experienced a cyberattack involving unauthorized access to patient data. Suspicious network activity was identified in September 2025, and the forensic investigation confirmed unauthorized access to its network between September 17, 2025, and September 23, 2025.

The data review revealed the exposed files included protected health information associated with its clients, such as names, dates of birth, contract numbers, health insurance providers’ non-unique identifiers, Medicare Beneficiary Identifiers, and information associated with attributed providers. The substitute data breach notice includes steps that the affected individuals can take to protect themselves against misuse of their information. While not stated in the substitute breach notice, the affected individuals should be aware that the Medusa ransomware group claimed responsibility for the attack and threatened to publish the stolen data. The group claims to have exfiltrated 378 GB of data from the Insightin Health network.

Clinic Service Corporation, Colorado

Clinic Service Corporation, a medical billing and practice management company based in Denver, Colorado, has experienced a hacking incident that exposed sensitive data. The intrusion was identified on August 17, 2025, and the forensic investigation confirmed that its network was accessed by an unauthorized third party from August 10, 2025, to August 17, 2025.

The data review has confirmed that personally identifiable information (PII) and protected health information (PHI) were compromised in the incident, including names, addresses, phone numbers, email addresses, dates of birth, diagnoses, treatment information, patient ID numbers, dates of service, medical record numbers, Medicare/Medicaid numbers, health insurance information, claims information, and treatment cost information. The affected individuals have been offered complimentary credit monitoring and identity theft protection services. Regulators have been notified, although the incident is not yet shown on the HHS’ Office for Civil Rights website, so it is currently unclear how many individuals have been affected.

The post Healthcare Technology Company Discloses Ransomware Attack appeared first on The HIPAA Journal.

FBI Urges Organizations to Take 10 Actions to Improve Cyber Resilience

The Federal Bureau of Investigation (FBI) has launched a campaign to improve the resilience of industry, government, and critical infrastructure against cyber intrusions. Operation Winter SHIELD (Securing Homeland Infrastructure by Enhancing Layered Defense) is tied to the National Cyber Strategy and the FBI Cyber Strategy, which views industry, government, and critical infrastructure as partners in detecting, confronting, and dismantling cyber threats.

The FBI explained, “Our goal is simple: to move the needle on resilience across industry by helping organizations understand where adversaries are focused and what concrete steps they can take now (and build toward in the future) to make exploitation harder.” Operation Winter Shield provides a practical roadmap for securing information technology and operational technology environments, hardening defenses, and reducing the attack surface. The campaign has kicked off with 10 recommendations developed with domestic and international partners to improve defenses against current cyber threats. The recommendations reflect current adversary behavior and common security gaps identified in recent investigations of cyberattacks.

The ten recommendations cover high-impact measures for reducing cyber risk by improving resilience and reducing the attack surface. Over the following 10 weeks, the FBI will publish further information and guidance on these cybersecurity measures:

  1. Adopt phishing-resistant authentication – Many data breaches start with credentials stolen in phishing attacks.
  2. Implement a risk-based vulnerability management program – Threat actors often exploit known, unpatched vulnerabilities in operating systems, software, and firmware for initial access.
  3. Track and retire end-of-life tech on a defined schedule – End-of-life software and devices are often targeted as they no longer receive security updates.
  4. Manage third-party risk – Security is only as good as the weakest link, which is often the least-protected vendor with network or data access.
  5. Protect and preserve security logs – Security logs are essential for detection, response, and attribution, and are often deleted by threat actors to hide their tracks.
  6. Maintain offline immutable backups and test restoration – Resilience depends on backups and tested recovery.
  7. Identify, inventory, and protect internet-facing systems and services – Eliminate any unnecessary exposure and reduce the attack surface.
  8. Strengthen email authentication and malicious content protections – Email is one of the most common initial access vectors and must be adequately secured.
  9. Reduce administrator privileges – Persistent administrative access enables rapid escalation when credentials are compromised.
  10. Exercise incident response plans with all stakeholders – Testing the response plan will allow organizations to respond rapidly and reduce the impact of a successful compromise.

Operation Winter Shield. Source: Federal Bureau of Investigation.
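As one illustration of recommendation 6, backup restoration testing can start with verifying that backup files still match previously recorded checksums. The sketch below is a minimal, assumed approach using a SHA-256 manifest; the file layout and manifest format are illustrative choices, not part of the FBI guidance.

```python
# Minimal sketch of backup integrity checking (one piece of testing
# restoration): record a SHA-256 checksum per backup file, then verify
# the set later. File layout and manifest format are illustrative
# assumptions, not FBI-prescribed.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large backups fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def write_manifest(backup_dir: Path, manifest_path: Path) -> None:
    """Record a checksum for every file in the backup set."""
    checksums = {
        p.name: sha256_of(p)
        for p in sorted(backup_dir.iterdir())
        if p.is_file()
    }
    manifest_path.write_text(json.dumps(checksums, indent=2))


def verify_backups(backup_dir: Path, manifest_path: Path) -> list[str]:
    """Return the names of backup files that are missing or altered."""
    expected = json.loads(manifest_path.read_text())
    failures = []
    for name, checksum in expected.items():
        candidate = backup_dir / name
        if not candidate.is_file() or sha256_of(candidate) != checksum:
            failures.append(name)
    return failures
```

Checksum verification catches silent corruption or tampering, but it is only a starting point; full restoration exercises, as the FBI recommends, still require periodically restoring the data and confirming systems run from it.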

The post FBI Urges Organizations to Take 10 Actions to Improve Cyber Resilience appeared first on The HIPAA Journal.

Legacy Health & Garnet Health Settle Class Action Lawsuits Over Website Tracking Tools

Two healthcare providers have agreed to settle class action lawsuits over their use of website tracking technologies. Website tracking technologies, such as pixels, can collect and transmit data about website users, which can include personally identifiable information and protected health information if installed on a healthcare provider’s website or patient portal. These tools have been found on the websites of many hospitals, and many lawsuits have been filed by individuals for privacy violations. Two such lawsuits against Legacy Health and Garnet Health have recently been settled, with no admission of liability, fault, or wrongdoing by the healthcare providers.

Legacy Health

Legacy Health, a nonprofit health system with seven hospitals and more than 90 clinics in Oregon and Vancouver, Washington, was sued over the alleged use of third-party tracking tools on its websites without the knowledge or consent of website users. According to the lawsuit, the tools transmitted patients’ personally identifiable information to third parties such as Meta Platforms Inc. (Facebook) and Alphabet Inc. (Google).

The lawsuit – Katherine Layman v. Legacy Health – asserted claims of negligence, breach of confidence, invasion of privacy, breach of implied contract, unjust enrichment, and violation of the Electronic Communications Privacy Act. All parties agreed to settle the litigation to avoid the cost and time of continued litigation and the uncertainty of trial.

Under the terms of the settlement, Legacy Health has agreed to pay up to $2,200,000 to cover attorneys’ fees and expenses, settlement administration costs, and an incentive award of $2,500 to the class representative. Class members are entitled to a one-year membership to CyEx’s Medical Shield privacy protection solution, and may submit a claim for a cash payment of $15.00. Individuals wishing to object to the settlement or exclude themselves must do so by March 16, 2026. Claims for cash payments must be submitted by March 16, 2026, and the final approval hearing has been scheduled for April 16, 2026.

Garnet Health

Garnet Health, a Middletown, New York-based three-campus health system with nine urgent care facilities serving residents of Orange and Sullivan Counties in New York, was alleged to have added tracking tools to its website and MyChart patient portal, which resulted in disclosures of individuals’ personally identifiable information and protected health information to Meta Platforms Inc. (Facebook) and Google Inc. without users’ knowledge or consent. Information allegedly disclosed included health conditions, searches for medical treatment, and other sensitive information.

Lawsuits were filed by Dolores Gay and Corinne Jacob over the alleged disclosures, which were consolidated as they had overlapping claims – Gay et al. v. Garnet Health. After a year of hard-fought litigation, all parties attended mediation and agreed to a settlement to resolve the lawsuit. Under the settlement, Garnet Health has agreed to pay attorneys’ fees and expenses, settlement administration costs, and service awards for the class representatives. All class members are eligible to enroll in Dashlane Premium, a privacy protection product, for 12 months. In addition, class members may claim a one-time cash payment of $19.50. Individuals wishing to object to the settlement or exclude themselves must do so by March 17, 2026. Claims for cash payments must be submitted by April 16, 2026, and the final approval hearing has been scheduled for April 13, 2026.

The post Legacy Health & Garnet Health Settle Class Action Lawsuits Over Website Tracking Tools appeared first on The HIPAA Journal.

HIPAA, Healthcare Data, and Artificial Intelligence

Artificial intelligence is rapidly reshaping healthcare, offering new ways to analyze data, support clinical decisions, streamline operations, and improve patient outcomes. From predictive analytics to ambient documentation tools, AI systems are becoming embedded in everyday workflows.

Yet as these technologies evolve, the legal and ethical frameworks governing their use remain grounded in long‑standing privacy and professional standards. In addition to HIPAA, which defines the federal rules for how Protected Health Information (PHI) may be used or disclosed, healthcare organizations must also navigate evolving state AI laws, ethical obligations embedded in professional codes of conduct, and their own organizational policies governing the responsible use of technology.

These frameworks emphasize responsibilities such as safeguarding patient confidentiality, exercising independent clinical judgment, and ensuring that technology does not replace the professional duties of licensed practitioners. Understanding how HIPAA and these broader obligations apply to the use of AI is essential for healthcare organizations seeking to innovate responsibly while protecting the confidentiality of health information.

How AI Is Being Used in Healthcare

AI tools now appear across nearly every corner of the healthcare ecosystem, but not all AI functions in the same way. Understanding these distinctions helps healthcare organizations assess risks, determine when PHI may be used or disclosed, and train workforce members on the appropriate use of AI tools.

Broadly, AI in healthcare can be grouped into four categories: artificial intelligence that performs tasks autonomously, augmented intelligence that supports human decision‑making, automation software with AI capabilities, and generative AI.

  1. Autonomous AI

This category includes systems designed to carry out specific tasks without continuous human involvement. These tools operate within defined parameters and produce outputs that may be used directly in clinical or operational workflows.

Examples include:

  • Autonomous diagnostic tools that detect diabetic retinopathy without requiring a clinician to interpret the image
  • Imaging analysis systems that independently identify abnormalities on radiology scans
  • Continuous‑monitoring tools that detect patient deterioration and trigger alerts

These systems raise important questions about clinical oversight, liability, and the extent to which AI outputs can be relied upon without human review.

  2. Augmented Intelligence

Augmented intelligence is designed to enhance, not replace, human judgment. These systems provide recommendations, predictions, or insights, but a clinician or workforce member remains responsible for interpreting the output and making the final decision.

Examples include:

  • Clinical decision support tools that suggest potential diagnoses or flag medication interactions
  • Risk‑stratification models that identify patients at high risk for readmission or deterioration
  • Population health analytics that help clinicians prioritize outreach or interventions

Because humans remain in control, augmented intelligence often fits more comfortably within existing professional and ethical frameworks, but it still requires careful oversight to avoid over‑reliance on algorithmic outputs.

  3. Automation Software with AI Capabilities

Many healthcare organizations use automation software to streamline administrative and operational tasks. When these systems incorporate AI such as machine learning or natural‑language processing, they can perform more complex functions than traditional rule‑based automation.

Examples include:

  • Revenue cycle tools that extract data from clinical documentation, predict coding categories, or flag claims likely to be denied
  • Prior authorization systems that help gather required documentation or identify missing elements
  • Operational workflow tools that predict no‑shows or optimize appointment scheduling

These tools often fall under “healthcare operations” for HIPAA purposes, but they still require access and audit controls, training to prevent impermissible disclosures of PHI, and, when software is provided by a third‑party vendor, Business Associate Agreements.

  4. Generative AI

Generative AI tools create new content based on patterns learned from large datasets. In healthcare, generative AI is increasingly used to create text, summaries, images, or structured data to reduce administrative burden and support communication.

Examples include:

  • Ambient documentation tools that draft clinical notes based on recorded patient encounters
  • Drafting tools that generate patient instructions, referral letters, or summaries for care coordination
  • Chatbots that answer patient questions or help navigate services, sometimes using PHI to personalize responses
  • AI‑enabled translation tools that generate full sentences rather than translating inputs word‑for‑word

Generative AI tools can improve efficiency and accessibility, but they also raise concerns about accuracy, context, and whether PHI is transmitted to systems that lack appropriate safeguards. These risks make governance, vendor management, and workforce training especially important.

HIPAA’s Role in Governing AI Use

HIPAA does not contain AI‑specific provisions because the HIPAA Security Rule is designed to be technology‑neutral. As a result, HIPAA’s existing Privacy, Security, and Breach Notification Rules govern how PHI may be used or disclosed to AI tools. These requirements apply regardless of whether PHI is handled by a human, a traditional software system, or an advanced AI model.

Under HIPAA, the starting point is whether a use or disclosure of PHI is permissible. PHI may be shared with an AI system for treatment, payment, and healthcare operations without patient authorization. For payment and healthcare operations purposes, HIPAA's minimum necessary standard requires organizations to limit the PHI used or disclosed to the minimum necessary to achieve the purpose of the use or disclosure.

The HIPAA Security Rule’s administrative, physical, and technical safeguards also apply in full. These safeguards require organizations to assess risks, implement appropriate controls, and ensure the confidentiality, integrity, and availability of PHI, regardless of whether information is processed by humans or algorithms.

When an AI tool is provided by a third‑party vendor, HIPAA’s business associate requirements come into play. A Business Associate Agreement is required whenever a vendor creates, receives, maintains, or transmits PHI on behalf of a covered entity, including when the vendor uses AI to perform regulated functions.

If PHI is disclosed to a third‑party AI tool without a Business Associate Agreement in place, or if de‑identified information is re‑identified by a vendor’s AI system, the incident qualifies as a notifiable breach under the HIPAA Breach Notification Rule. Other events may also trigger breach notification obligations – for example, if an AI‑generated output includes more than the minimum necessary information and is then shared (even permissibly) with a third party without being validated for HIPAA compliance.

In other words, AI does not sit outside HIPAA. It is simply another mechanism through which PHI may be used or disclosed, and the same HIPAA compliance obligations apply. What changes with AI is not the legal framework, but the operational risks and the need for organizations to understand how these tools function so they can apply HIPAA’s requirements appropriately.

State Laws with Stricter Requirements

While HIPAA provides the federal baseline for privacy and security, multiple states have enacted more stringent laws governing disclosures to AI tools or automated decision‑making systems. Some states (e.g., Texas) have enacted multiple laws that impact the use of AI in different areas of healthcare.

These laws vary widely in scope and applicability but often include requirements such as explicit consent before sensitive information can be used for automated processing, restrictions on secondary uses of data (including model training), and transparency obligations requiring organizations to inform individuals when AI is used in their care. Several prohibit sharing sensitive categories of information with AI tools, such as mental health, reproductive health, substance use disorder, or genetic data.

For organizations operating across multiple states, these variations create a complex compliance landscape. Workforce training must reflect not only HIPAA but also the most protective state‑level requirements that apply to the organization’s operations.

The Risks of Using AI in Healthcare and How to Avoid Them

AI introduces new categories of risk that extend beyond traditional privacy and security concerns. Some risks arise from how AI systems process information, while others stem from how workforce members interact with these tools. Understanding these risks, and implementing safeguards to mitigate them, is essential for using AI in a manner that complies with HIPAA and protects the confidentiality of health information.

One of the most common risks is the inadvertent disclosure of PHI when workforce members enter identifiable information into public or non‑HIPAA‑compliant AI tools. Even when an AI tool is approved, staff may unintentionally disclose more than the minimum necessary, especially when copying AI‑generated outputs into emails, referral notes, or other communications.

AI systems also carry operational and clinical risks due to confabulations. Confabulations occur when an AI tool combines unrelated or partially related data elements into a single, inaccurate output. These errors can lead to incorrect summaries, misaligned recommendations, or misleading documentation if they are relied on without verification. AI tools may also behave unpredictably when encountering unusual inputs, edge cases, or ambiguous information.

To manage these risks, organizations should implement mechanisms that allow workforce members to report anomalies, unexpected behaviors, and inaccurate outputs. These reports help identify patterns, support continuous improvement, and ensure that AI tools are used safely. They can also support the development of standardized prompts, helping organizations determine whether inaccuracies stem from the tool itself or from the way a question is phrased or input.

Logging AI interactions is equally important. Audit logs allow organizations to review how AI tools were used, assess the accuracy of outputs, and investigate potential privacy incidents or operational errors. Logging also supports quality assurance, model monitoring, and compliance reviews.
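A minimal sketch of such an audit log might use an append-only JSON-lines file; the field names and the choice to store a prompt hash rather than the prompt itself are illustrative assumptions, not anything prescribed by HIPAA.

```python
# Minimal sketch of an AI-interaction audit log (illustrative only).
# Prompts are stored as SHA-256 hashes so the log itself does not
# retain PHI, while still letting reviewers match a logged entry to a
# prompt retained elsewhere under appropriate safeguards.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def log_ai_interaction(log_path: Path, user_id: str, tool: str,
                       prompt: str, output_accepted: bool) -> dict:
    """Append one audit record per AI interaction and return it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_accepted": output_accepted,  # False = flagged for review
    }
    with log_path.open("a", encoding="utf-8") as handle:
        handle.write(json.dumps(record) + "\n")
    return record
```

An append-only, one-record-per-line format keeps entries reviewable with ordinary tools and makes it straightforward to count how often a given AI tool's outputs are flagged rather than accepted.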

Other risks include data leakage, model drift, and over‑reliance on automation. For example, if an AI model is trained on outdated data, its outputs may become less accurate over time. Similarly, workforce members may assume that AI‑generated content is always correct, leading to reduced vigilance and missed errors.

Organizations can avoid these risks by using only AI tools that support HIPAA compliance, configuring the tools to mitigate the risk of a HIPAA violation, and maintaining clear policies on what staff may and may not input into AI systems. Strong governance structures are also essential to evaluate new AI tools, monitor performance, and ensure that safeguards remain effective over time.

Training the Workforce to Use AI in Compliance with HIPAA

As AI tools become part of everyday workflows, workforce members must understand how to use them in a way that protects patient privacy and complies with HIPAA. HIPAA AI training for healthcare staff should give workforce members a clear understanding of the risks associated with AI, the safeguards the organization has put in place, and the practical steps each person must take to ensure PHI is handled appropriately.

AI introduces several risks that staff need to be aware of. These include the inadvertent disclosure of PHI when information is entered into public or non‑HIPAA‑compliant tools, the possibility of confabulations that combine unrelated data into inaccurate outputs, and the risk of over‑reliance on AI‑generated content. AI tools may also behave unpredictably when encountering unusual inputs or ambiguous information, and outputs may contain more than the minimum necessary if not carefully reviewed.

As part of training, organizations should clearly identify which AI tools have been authorized and configured to support HIPAA compliance. Staff should be instructed to use only these approved platforms and to avoid entering PHI into any unapproved or public AI system. Training should also explain that approved tools have been evaluated for security, contractual protections, and appropriate safeguards, but that these protections do not eliminate the need for human oversight.

Training should also cover state‑specific requirements. Some states impose stricter consent rules, especially for sensitive categories of information such as mental health, reproductive health, substance use disorder, or genetic data. Workforce members must understand when consent is required before using AI tools and how these state‑level rules interact with HIPAA’s permissible uses and disclosures.

In addition, training should address operational workflows. Staff need to know how to use ambient documentation tools, clinical decision support systems, and revenue cycle automation platforms safely and appropriately. This includes understanding what information may be entered into these tools, how to review outputs, and when to escalate concerns. Training should also reflect role‑based access controls so that staff understand which AI tools they are permitted to use.

To support the compliant use of AI, workforce training should include the following best practices:

  • Only use approved AI platforms. Do not enter PHI into any tool that has not been authorized by the organization.
  • Fully de‑identify PHI before AI input whenever possible. Remove names, dates, contact information, and any other identifiers unless the task requires identifiable data.
  • In all other cases, standardize minimum‑necessary inputs. Provide only the information needed for the task and avoid including extraneous details.
  • Ensure you obtain consent when required. Some state laws or organizational policies require explicit consent before using AI for certain types of information or processing.
  • Log AI interactions for auditing. Follow organizational procedures for documenting how AI tools are used so that outputs can be reviewed and any issues investigated.
  • Always review and validate AI outputs before use. Never assume an AI‑generated summary, recommendation, or explanation is correct without checking it against the source information.
  • Document decisions influenced by AI. When AI contributes to a clinical or operational decision, record what prompts were used, what outputs were generated, and how the outputs were validated.
  • Flag anomalies, unexpected behaviors, and inaccurate outputs. Reporting these issues helps the organization identify patterns, improve tools, and prevent future errors.
  • Never use AI to answer a HIPAA compliance question. Compliance questions must be directed to the organization’s privacy or compliance team, not to an AI system.
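The de-identification and minimum-necessary practices above can be partially automated before text ever reaches an AI tool. The sketch below scrubs a few obvious identifier patterns from free text; the patterns are illustrative assumptions and fall far short of full HIPAA Safe Harbor de-identification, which covers 18 identifier categories, so a vetted de-identification process is still needed in practice.

```python
# Minimal illustration of scrubbing obvious identifiers from free text
# before it is sent to an AI tool. NOT full Safe Harbor
# de-identification; the patterns below are illustrative assumptions.
import re

SCRUB_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE), "[MRN]"),
]


def scrub_identifiers(text: str) -> str:
    """Replace obvious identifier patterns with placeholder tokens."""
    for pattern, placeholder in SCRUB_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

A screening step like this supports, but does not replace, the human review and approved-tool requirements in the list above: names, free-text descriptions, and rare identifiers will slip past simple patterns.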

HIPAA AI training for healthcare staff should be scenario‑based, practical, and relevant to workforce members’ roles. Staff need to understand not only the rules but also the real‑world situations where errors occur. Organizations should provide concrete examples of how AI tools can produce incorrect, misleading, or incomplete outputs.

Seeing how AI gets it wrong in realistic scenarios reinforces the importance of validating AI‑generated content and encourages the vigilance needed to use these tools safely. Training should also be updated as AI tools evolve so that staff remain familiar with new features, changes in workflows, and updated organizational policies.

The post HIPAA, Healthcare Data, and Artificial Intelligence appeared first on The HIPAA Journal.