By Dyanne L. Peters, Esq.
This article first appeared in our Communicator Magazine, Winter 2026 Issue.
When you hear the term "AI," it may conjure images of robots taking over the world as seen in popular movies like The Matrix, Terminator, or I, Robot. Despite these sci-fi depictions that paint a technological dystopian vision of the future (and make for entertaining films), the use of this tool has grown tremendously over the last several years, to the point where AI has become integrated into our everyday lives. While it is becoming a more widespread and useful technology, it is still in its beginning stages, and there are many issues to navigate and consider when using AI.
In the context of a community manager's everyday functions, AI is certainly convenient and useful. AI is becoming a powerful addition to a manager’s toolkit, but it is just that: a tool. AI cannot replace a manager’s critical thinking, problem solving, experience, and due diligence. While AI can do many things well, it cannot (and should not) do it all!
UNDERSTANDING AI, ITS SCOPE, AND ITS LIMITATIONS
The first step to using a new technological tool like AI is to understand how it works. Two types of AI tools are most commonly used: "extractive AI" and "generative AI." Extractive AI locates the information the user is looking for, much like a simple Google search. Generative AI, on the other hand, reviews and analyzes information and generates an answer to a query. An example of generative AI is ChatGPT.
Next, a user should understand what sort of system the AI program is: a "closed loop system" or an "open loop system." These systems describe what the AI is trained on, or in other words, the resources the AI program will utilize or pull information from. A closed loop system pulls only from a specific set of information, while an open loop system utilizes publicly available information. When a user inputs information or makes a query in an open loop system, that system may integrate the information from that use into the public system. This means that open loop systems are likely not secure and may compromise the security and confidentiality of information. A useful maxim: if it’s free, it is probably not secure.
It is also important for an AI user to understand "hallucinations." Hallucinations are fabricated content produced by generative AI platforms that is presented as if true and factual. Generative AI is designed to help or assist the user and therefore wants to produce the "correct" answer in response to your query. This may result in ambiguous, contradictory, false, or even "made up" information. That is because generative AI is designed to give a plausible answer, not necessarily the correct answer.
USE OF AI IN COMMUNITY MANAGEMENT
There are a number of ways that managers can integrate AI into their day-to-day responsibilities. One of the most immediate and practical uses is during board meetings. AI can be extremely helpful for taking notes, generating meeting summaries, and organizing key discussion points. This material can then assist the manager in preparing meeting notes, creating to-do lists, tracking action items, and following up on board directives.
Another valuable use of AI is in the area of communications. AI can provide a first draft of an email or help managers strike the right "tone" in a message or letter. Managers can also take a more creative approach by using AI to design community newsletters, draft announcements, or prepare regular updates to residents. AI tools can even assist with preparing more formal written communications, such as violation letters, delinquency notices, or demand letters.
AI may also prove useful when managers are reviewing and summarizing complex documents like CC&Rs, bylaws, rules, contracts, and insurance policies. For example, an AI system may be able to identify specific provisions in the governing documents or create a concise summary of insurance or maintenance obligations. This can help managers better understand complicated information and more effectively present it to the board or the membership.
Beyond these core uses, AI can support managers in several other helpful ways. AI tools can assist in organizing maintenance schedules, generating draft agendas, or creating checklists for annual disclosures. AI can also help managers draft routine forms and build templates for recurring tasks. Overall, when used thoughtfully, AI can enhance a manager’s efficiency and improve communication with the board and to the community.
POTENTIAL PITFALLS AND BEST PRACTICES
All of the potential uses of AI described above can certainly streamline a community manager’s work by giving managers the ability to accomplish everyday and routine tasks more quickly and efficiently. There are a lot of upsides to using AI for some of these tasks, but it is important to remember that there are some pitfalls as well. At the end of the day, the manager, not the AI tool, is responsible for the work produced. This means that to avoid any issues with AI hallucinations, managers should carefully review all AI-produced material to make sure that it comports with the facts.
If using AI for meeting notetaking, the manager should review the summary right away to check for any misstatements of fact or inaccuracies. This means checking that motions, decisions, votes, and recitations of board investigations are correct. This is important both for the association’s records and to protect the association and the board members’ decisions under the business judgment rule. It is also important that managers proceed with caution if they want to record video conference meetings. These recordings may be considered part of the association’s records and may be discoverable in a lawsuit. Therefore, it is generally advised that board meetings not be recorded.
Additionally, before using recording or AI tools, it is always a good idea to inform the board and obtain its prior approval. Along the same lines, any summaries of documents or letters that AI produces should also be reviewed for accuracy by the manager.
Managers should also be wary about the information they put into AI systems, especially open loop systems. Managers have a duty (and usually a contractual obligation) to keep client information secure and confidential. This means that managers and management companies should be cautious about putting board members' personal information and confidential contracts or other documents into AI systems that are not secure.
Today’s best practices reflect a simple duality: be bold in embracing AI’s upside and vigilant in managing its pitfalls. Managers must also remember that the buck stops with them; there is no "AI exception" to a manager's due diligence requirements. If the AI system makes a mistake or produces a hallucination, it falls on the manager to review and verify the source material. Management companies should look into adopting company AI policies that address security and confidentiality issues, establish procedures for ensuring that managers are adequately informed, trained, and monitored on AI use, and ensure that clients are informed about how their information is being used and that prior consent for that use is obtained. A commitment to continuous learning, well-defined policies, secure platforms, and disciplined human oversight enables managers and management companies to adopt AI early without compromising professional obligations.
Dyanne L. Peters, Esq. is a Senior Attorney at Tinnelly Law Group and has been practicing law in the CID industry since 2016. As part of the CAI-OC Education Committee, she teaches community leadership and other educational programs, for which she was recognized as Committee Member of the Year in 2022 and 2024, and she frequently publishes articles on industry-relevant topics. She has extensive experience providing legal counsel to HOAs across California.