
Raleigh ISSA October 3, 2019 Chapter Meeting at RTP Headquarters
October 3, 2019 @ 5:00 pm - 8:00 pm

Chapter Event
Members, PLEASE RSVP
Guests
- https://www.eventbrite.com/e/raleigh-issa-monthly-meeting-guest-tickets-30871733185?aff=ebdssbdestsearch
- Guests: If this is your first meeting, use code 2019firstvisit at checkout. Remember to register before the day of the event. This code will no longer be valid after 2 January.
AGENDA
5:15 – 6:00 pm Career Services (Conference Room 1)
5:15 – 6:00 pm Back-to-Basics (Main Room)
6:00 – 6:45 pm Dinner: Randy’s Pizza, drinks, and socializing (Lobby)
6:00 – 8:00 pm STORM CTF teaser – https://stormctf.ninja/ctf/events/infosecon-2019
7:00 – 7:15 pm Board Updates (Main Room)
7:15 – 8:30 pm Main Presentation (Main Room): John Cloonan – “The use of artificial intelligence (AI) and machine learning in security”
Back-to-Basics (B2B) – 5:15 pm
Topic:
Presenter:
Main Presentation – 7:15 – 8:30 pm
Topic: “The use of artificial intelligence (AI) and machine learning in security”
John Cloonan
Director of Products – Lastline
BIO:
John Cloonan is Director of Products for Lastline with a passion for creating innovative information security solutions. Of his nearly 25 years of professional experience, he has spent more than 15 years in Information Security software development and service delivery. Prior to Lastline, John was the Program Director for Threat Intelligence at IBM, and previously worked at Tripwire, SecureWorks, and GuardedNet.
Abstract:
The use of artificial intelligence (AI) and machine learning in security is nothing new: security companies have applied AI techniques to a wide variety of tasks, such as malware analysis and breach detection. Too often, however, there is an unrealistic expectation that AI is the solution to many, if not all, of today’s security problems.
The security community is largely ignoring the problem of “adversarial machine learning,” in which knowledge of the learning process is used to subvert it and bypass security mechanisms, leading to unexpected failures.
- What if a cyber-criminal knows what process is used to perform machine learning and what parameters have been learned?
- What if the adversary can pollute the datasets that are used as a basis for the learning process?
- Will the cyber-criminal be able to change his attacks so that they appear to conform to what has been learned to be “normal”?
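The dataset-pollution scenario above can be sketched with a toy example (the data, the threshold “model,” and the scores are entirely hypothetical, chosen only for illustration): a trivial one-dimensional detector learns a threshold as the midpoint between the benign and malicious class means, and an attacker who can inject mislabeled training samples drags that threshold upward until a real attack slips under it.

```python
# Toy sketch of training-data poisoning (hypothetical data and model).
# The "classifier" learns a detection threshold as the midpoint between
# the mean scores of the benign and malicious training samples.

def mean(xs):
    return sum(xs) / len(xs)

def learn_threshold(samples):
    """samples: list of (score, label) pairs, label 'benign' or 'malicious'."""
    benign = [s for s, lbl in samples if lbl == "benign"]
    malicious = [s for s, lbl in samples if lbl == "malicious"]
    return (mean(benign) + mean(malicious)) / 2

clean = [(1, "benign"), (2, "benign"), (3, "benign"),
         (8, "malicious"), (9, "malicious"), (10, "malicious")]
clean_threshold = learn_threshold(clean)        # (2 + 9) / 2 = 5.5

# The attacker pollutes the training set: malicious-looking samples
# mislabeled as benign raise the benign mean (to ~5.33), pushing the
# learned threshold up to ~7.17.
poisoned = clean + [(7, "benign")] * 6
poisoned_threshold = learn_threshold(poisoned)

attack_score = 6.5  # a real attack scoring 6.5
print(attack_score > clean_threshold)     # True  -> detected by the clean model
print(attack_score > poisoned_threshold)  # False -> evades the poisoned model
```

Real models are far more complex, but the mechanism is the same: whoever controls part of the training data can shift what the model learns to call “normal.”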
We are already seeing the early signs of this shift in attack techniques, in which cybercriminals continuously change the appearance of their malware to defeat automatically learned signatures. Soon, we will see attackers explicitly target machine learning and AI.
This talk presents how AI and machine learning are currently used in various security fields, from autonomous hacking to malware classification and network anomaly detection, and describes a number of attacks that have been carried out against machine-learning approaches.