Upcoming Chapter & Industry Events
ISSA NE Chapter's Upcoming Meeting Schedule
Our April webinar will be held on April 10th from 12:00 to 1:00 pm.
Topic
Stories from the Front Lines of the Ransomware Pandemic in Healthcare
Summary
Justin Armstrong will draw on lessons learned from the nearly 100 ransomware incidents at hospitals where he was
engaged. Join us on this journey as we encounter a variety of strange scenarios: ransomware
combined with insider threat; the EHR is not encrypted, but the hospital is still down; the decryption keys worked,
but the data is still unusable.
We will cover such topics as:
· Adopting a whole-organization approach to ransomware preparedness
· To pay or not to pay?
· Does it constitute a data breach?
· How has ransomware evolved, and what can we expect next?
· Strategies and tactics for incident response
Key Takeaway
A deeper understanding of how ransomware incidents unfold in healthcare and how to prepare for them.
Zoom registration link: https://us02web.zoom.us/meeting/register/tZAlcuisqj0rHdKXr3Fy-cb0P2XCXsBCP5eV
Bio
Justin Armstrong is a security, privacy, and regulatory compliance consultant with over 25 years of experience
in the healthcare industry. He led Product Security at MEDITECH, a top-three Electronic Health Record vendor.
Justin is deeply committed to securing critical infrastructure and has been a proud member of InfraGard, a
partnership between the FBI and the private sector, since 2015. He has provided extensive threat intelligence
and guidance to healthcare executives, management, and technical staff, and has engaged with hospitals in
nearly 100 ransomware incidents.
He holds the CISSP and HCISPP certifications and earned his master's degree in Cybersecurity Leadership at Brandeis
University.
Please plan to join us for an in-person meeting on May 20th from 10 am to 3:30 pm.
The theme of this meeting will be AI: the benefits, the security and privacy risks, and how to manage
those risks.
Location: The Connors Center at Boston College in Dover, MA
Registration will begin at 9:30 am
Coffee and lunch will be provided
Please pre-register for this event:
https://www.issa-ne.org/chaptermeetingregistration
Agenda [10 am - 12 noon]
Keynote Speaker
Scott Weiner
AI Lead, NeuEon, Inc.
Topic
AI presentation and Q&A session
Join us for a thought-provoking exploration of the intersection between generative AI, cybersecurity, and responsible innovation. As AI continues to evolve rapidly, understanding its inner workings is key to making informed security and compliance decisions. AI strategist Scott Weiner will dive into the fascinating world of large language models (LLMs), using ChatGPT as an example, demystifying the technology for security professionals.
Drawing on his practical experience advising companies, his lectures for congressional staff, and his participation in policy discussions on AI, Scott will offer an insightful walkthrough of how LLMs operate, exploring the fundamentals of this game-changing technology.
Attendees will gain insights into key security and privacy considerations when working with generative AI, as well as the potential implications of LLMs for cybersecurity and data protection.
SPEAKER BIO
Scott Weiner is a technology leader and strategic advisor with a passion for driving innovation at the intersection of cutting-edge technologies and responsible practices.
Understanding the significant potential of AI, Scott engages with policymakers, executives, and technology teams to help them understand the implications and leverage this transformative technology. His insights on generative AI have informed discussions with congressional staff, complementing his engagements in policy-related conversations on privacy, patent law, and regulatory frameworks.
With a unique foundation in technical knowledge, industry experience, and policy awareness, Scott assists organizations in navigating AI's potential responsibly, emphasizing security and compliance.
Lunch [12 - 12:45 pm]
Agenda [12:45 pm - 1:45 pm]
Generative AI Risks: The Challenges of Using ChatGPT in a Business Environment
ChatGPT and other generative AI models have seen explosive growth in popularity and use. These models power a wide range of applications and deliver exceptional value. However, they carry inherent security and privacy vulnerabilities that make them highly susceptible to data and image manipulation, as well as privacy breaches.
Join this session to gain an understanding of:
· Generative AI security and privacy vulnerabilities
· ChatGPT's implications for data privacy, GDPR compliance, intellectual property and copyright infringement, and misinformation and biased information
· Tangible steps and AI-specific security frameworks you can leverage to mitigate AI risks in a business environment
Speakers
Arun Subramoniam – Kaisura Cofounder, CEO, and Secure-AI Principal Consultant
Arun is an engineer and entrepreneur with proficiency in managing and growing businesses. He has over twenty years of experience constructing robust information security and cybersecurity processes and assembling high-performing teams. Since 2019, he has been actively engaged in the research and development of a comprehensive AI-focused security framework and a product innovation that detects adversarial manipulation of AI-enabled technologies. These endeavors stem from Arun's determination to significantly enhance the security and data privacy of AI deployments.
Maryann Conway – Kaisura Co-founder, Chief AI Strategist, and Secure-AI Consultant
Maryann has decades of real-world cross-industry experience leading organizations in developing, implementing, and managing all aspects of secure-AI, cybersecurity, and information security. Consequently, she is equipped with a deep understanding of the intricate challenges and nuances within these domains. This speaks to Maryann's expertise in crafting robust strategies, fostering a culture of security awareness, driving innovation to safeguard critical assets, and upholding the confidentiality, integrity, and availability of digital and AI ecosystems.
Agenda [2 pm - 3 pm]
Building a Secure Foundation: A Secure and Responsible Adoption Journey
Abstract
In today's data-driven world, Artificial Intelligence (AI) offers tremendous potential to revolutionize how companies operate. However, unlocking this potential requires a robust framework to ensure responsible and compliant AI use. This presentation dives into the essential components of a Governance, Risk, and Compliance (GRC) program specifically designed for AI implementation within your organization. *This abstract was created with the assistance of Google's Gemini.
Speaker Bio
Candy Alexander, CISSP, CISM
Candy is a highly accomplished cybersecurity professional with over 35 years of experience driving security strategy and innovation. Currently, she serves as the Chief Information Security Officer (CISO) at NeuEon, leading their cybersecurity posture and spearheading their Cyber Practice. A testament to her leadership and expertise, Candy served as the past president of the International Information Systems Security Association (ISSA), a leading international professional association for cybersecurity leaders. Recognized as a thought leader in the field, Candy is a frequent speaker at industry conferences and workshops, sharing her insights on the evolving cybersecurity landscape. Her expertise is also sought after by media outlets, where she offers informed commentary on critical cybersecurity issues. Throughout her career, Candy has made significant contributions to the advancement of cybersecurity practices, fostering a more secure digital environment for all.
Our July meeting will be a virtual fireside/pool chat lasting 1-2 hours.
[date and additional details will be provided shortly]