Cybersecurity researcher Jeremiah Fowler discovered and reported to vpnMentor a non-password-protected database that contained more than 4.8 million records belonging to Care1, a Canadian company offering AI software solutions to support optometrists in delivering enhanced patient care.

The publicly exposed database was not password-protected or encrypted. It contained over 4.8 million documents with a total size of 2.2 TB. In a limited sampling of the exposed documents, I saw eye exams in .PDF format, which detailed patient PII and doctors’ comments and included images of the exam results. The database also contained .csv and .xls spreadsheets that listed patients and included their home addresses, Personal Health Numbers (PHNs), and details regarding their health.

The name of the database as well as the documents inside it indicated that the records belonged to Care1, a Canadian medical technology company that provides software and AI reporting for optometry doctors specializing in retina and glaucoma treatments. I immediately sent a responsible disclosure notice, and public access was restricted the following day. It is not known how long the database was exposed or if anyone else gained access to it. Only an internal forensic audit could identify additional access or potentially suspicious activity. I received a reply from an administrator immediately after my disclosure notice stating: “Thank you for bringing this to our attention. Our team is currently working on resolving this issue”. It is not known if the database was owned and managed by Care1 directly or via a third-party contractor.

In the Canadian healthcare system, a Personal Health Number (PHN) is a unique identifier that is used to ensure that a patient’s health information is available to all healthcare providers. It is a lifetime identifier that remains the same even if a patient’s personal status changes. While the PHN itself may not directly lead to financial fraud or identity theft, it could potentially be combined with other personal information to create an identity profile on patients. Unauthorized access to an individual’s private medical history and the misuse of services under someone else’s name or PHN remain ongoing concerns in the medical industry. Medical information is generally considered to be highly sensitive, and its exposure could have serious privacy implications.

Care1 claims its software has managed over 150,000 patient visits and that the company has over 170 partner optometrists. According to its LinkedIn profile: “Care1 is one of the most specialized healthcare technology companies in the world, with an exclusive expertise on massive disruption within eyecare utilizing artificial intelligence. We take radical high-tech ideas, super-charge them with advanced software engineering, leverage our extensive partnership networks, and effect practice-altering changes within the offices of grassroots doctors.”

Healthcare systems globally have moved to digital record keeping, which includes the use of cloud storage environments. According to a 2019 survey, an estimated 86% of Canadian family physicians reported using electronic medical records (EMRs). The increased use of EMRs has fundamentally changed the way doctors share and access patient data. However, the added convenience also increases the potential privacy risks for patients, providers, and health information systems. Medical data is considered among the most valuable data on the dark web, with prices as high as $1,000 per record, nearly 40 times more than credit card numbers. Unsurprisingly, medical information systems are often targeted in ransomware attacks. In the United States, the FBI documented 440 ransomware attacks targeting the healthcare sector in 2023 and the first half of 2024. Canadian health information systems, by comparison, have registered significantly fewer attacks that compromised patient data, with only 14 major cyberattacks since 2015.

In Canada, health data is protected by a patchwork of federal and provincial legislation that governs how private sector organizations collect, handle, and store personal information, including health data. Private sector companies that provide medical software play a crucial role in the healthcare ecosystem by creating products and services that enable doctors to efficiently manage patient data. Medical records, by their nature, include personal identifiers, health histories, and confidential diagnoses. This is why third-party service providers managing health data must take additional steps to enhance their cybersecurity and data protection.

My advice would be to encrypt any documents that contain PII and to ensure the database where patient data is stored is configured to restrict public access. Developers should conduct regular testing and patch management, providing software updates to address vulnerabilities as they are identified. Another important measure is implementing multi-factor authentication (MFA), which adds an extra layer of protection and can prevent unauthorized access. Furthermore, it is crucial to educate healthcare providers so they can recognize phishing attempts or other suspicious signs of unauthorized data access.
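To illustrate the first recommendation, the following is a minimal sketch of how a document containing PII might be encrypted before it is written to storage. It assumes Python with the open-source cryptography package and uses its Fernet API; the file names and the in-script key generation are hypothetical and for illustration only, since in practice encryption keys should be kept in a dedicated secrets manager, separate from the data they protect.

# Minimal illustrative sketch: encrypt a document containing PII before storage.
# Requires the "cryptography" package (pip install cryptography). File names and
# key handling here are hypothetical; real keys belong in a secrets manager.
from cryptography.fernet import Fernet

def encrypt_file(input_path: str, output_path: str, key: bytes) -> None:
    """Read input_path, encrypt its contents with the Fernet key, and write the ciphertext."""
    fernet = Fernet(key)
    with open(input_path, "rb") as f:
        plaintext = f.read()
    with open(output_path, "wb") as f:
        f.write(fernet.encrypt(plaintext))

def decrypt_file(encrypted_path: str, key: bytes) -> bytes:
    """Return the decrypted contents of a previously encrypted file."""
    fernet = Fernet(key)
    with open(encrypted_path, "rb") as f:
        return fernet.decrypt(f.read())

if __name__ == "__main__":
    key = Fernet.generate_key()  # illustrative only: store keys separately from the data
    encrypt_file("eye_exam_report.pdf", "eye_exam_report.pdf.enc", key)
    original = decrypt_file("eye_exam_report.pdf.enc", key)

Even a simple application-level layer like this means that a leaked file or an exposed storage bucket does not automatically translate into readable patient records, provided the keys are managed separately from the data.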

Cyberattacks on medical institutions are not only more common nowadays, but they are also increasingly sophisticated, so any vulnerability in third-party software or a database misconfiguration can potentially expose sensitive patient information. I am not saying that Care1’s data, or that of its users and patients, was ever at risk or compromised in any manner. I am only providing a general observation of the potential cybersecurity risks inherent in medical data management and the use of private sector software providers.

I imply no wrongdoing by Care1, and I do not claim that internal data or user data was ever at imminent risk. The hypothetical data-risk scenarios I have presented in this report are exclusively for educational purposes and do not reflect any actual compromise of data integrity. This report should not be construed as a reflection of any organization’s specific practices, systems, or security measures. As an ethical security researcher, I do not download the data I discover. I only take a limited number of screenshots solely for verification purposes. I do not conduct any activities beyond identifying the security vulnerability and notifying the relevant parties. I disclaim any responsibility for actions that may be taken as a result of this disclosure. I publish my findings to raise awareness of issues of data security and privacy. My aim is to encourage organizations to proactively safeguard sensitive information against unauthorized access.