National Cyber Analyst Challenge
It has been about a week since the challenge started. Please let us know how the Cyber Analyst Challenge is going for you. What has been the most challenging? What has worked well? What was most surprising? We look forward to your thoughts!
Keith M. Hoodlet says
I think that the most difficult part of the challenge for our team was simply figuring out how and where to start. To overcome this, we worked out the scenario on a whiteboard: what we might expect to see, what indicators we had been provided, and how all of that information might come together. Once we came to a consensus on what we should be looking for – and how to go about finding it – the process of performing forensic analysis became much easier to complete.
As to the most surprising aspect, our team was fairly impressed with the capabilities and strengths that can be leveraged with Wireshark. Some of the team had exposure to Wireshark in the past, but never to the same extent as it has been used in performing our Phase I analysis. We took advantage of a great number of the features available in Wireshark to sort out the signal from the noise in this phase of the challenge, and were very happy with the result.
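For teams newer to the tool, the kinds of display filters we leaned on look something like the following (the addresses and names here are placeholders rather than values from the challenge data, and the annotations after # are explanatory, not filter syntax):

```
http.request                          fetch all HTTP requests
http.request.method == "POST"         uploads and form submissions
ip.addr == 192.0.2.10                 traffic to or from a single host
dns.qry.name contains "example"       suspicious DNS lookups
tcp.stream eq 5                       isolate one TCP conversation
```

From any matching packet, Follow > TCP Stream and the Statistics > Conversations view were also useful for pulling a single exchange out of the noise.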
In summary, I think that whiteboarding the scenario and continuing to return to the whiteboard to update events as they were discovered and to “connect the dots” helped our team to discover an ever-increasing amount of signal within the noise.
Alex Lynch says
Fewer Markov chains, please.
Danielle Wright says
I agree with Keith. The first hurdle was figuring out where to start. We also used a whiteboard to detail what we thought might be occurring after the FBI notification and began working backwards from there.
We’ve been able to find several indicators using Wireshark, and we have a good idea of what happened and some mitigations that can be applied to help with damage control.
We managed to view one of the event files, which helps corroborate our claims, but the other two appear to be corrupted. We don’t know if they were meant to be this way or if there was an issue in the file transfer when we made our forensic copy of the evidence.
A couple of issues we’ve had: We’re trying to find further evidence to support our claims, and we’re having some trouble with the memory dumps. We haven’t been able to get the Volatility memory analysis tool working in order to analyze the dump, which could be due to our lack of exposure to the tool. We’re also having some minor problems with the provided VMs in terms of accessing the accounts on them. Is anyone else having these issues?
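For anyone who can spot where we’re going wrong, here is the Volatility 2 workflow we’ve been attempting, as we understand it (the file name below is just a placeholder for our actual dump): identify the image’s profile first, then pass that profile to each analysis plugin.

```shell
# Ask Volatility to guess the OS profile of the memory image
volatility -f memdump.raw imageinfo

# Use one of the suggested profiles with the analysis plugins
volatility -f memdump.raw --profile=Win7SP1x64 pslist
volatility -f memdump.raw --profile=Win7SP1x64 netscan
volatility -f memdump.raw --profile=Win7SP1x64 malfind
```

If imageinfo can’t suggest a profile at all, that would be consistent with a corrupted or truncated dump rather than a tooling problem on our end.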
Joseph Mayes says
I think that, aside from the technical issues, there also was the logistics of schools starting on different dates and the effect that has on bringing teams together. From what I’ve heard, some schools were already in session before our 8/31 start date, and other schools aren’t starting until mid-September or later.
Joseph Mayes says
One other item: In the Q and As, the following was posted:
•Phase I: About 16 – 22 hours total per team
•Phase II: About 8 – 12 hours per individual
•Phase III: About 6 – 8 hours of preparation and attendance at challenge
Given that the data we were actually provided was described in bridge calls as having ‘more in the data than you could possibly find’, the estimates above (especially for Phase I) seem unrealistic. People will naturally feel pressure to find as much as possible, meaning the 16–22 hours per team is an understatement of the effort needed to get through Phase I, especially as there is an achievement-based ‘winnowing’ process between Phases I and II. I feel I wasn’t equipped to tell my student team the amount of commitment they would actually have to make to be successful in Phase I.
Keith M. Hoodlet says
Joseph,
I feel the estimated amount of work for Phase I is probably accurate for those teams that approached the challenge with a strategy in mind. This is where going to the whiteboard really helped out our team. We discussed what we knew, what we might expect to see, what indicators we should look for, and how to look for those indicators.
Armed with that knowledge and a strategy for moving forward, the amount of time should be close to accurate.
Cheers,
Keith
Brendan S. McDermott says
Our team brought a diversity of skills and experience that helped us continue to move the project forward. Some of us have been working together for a year or more already, so we had a pretty good workflow established before we even got started. I found it important to focus on the “what does this mean to the company” aspect as we began to uncover more and more details of the exploit.
As with any big project, there are only so many hours before the deadline, so, even though there were plenty more things to find, we needed to summarize and submit the report before having “all” of the relevant information. Our team’s experience prepared us for this as well, as drafts of the presentation were being iterated in parallel with the technical discoveries.
Kell Rozman says
Our team learned about the challenge at the last minute, which required us to quickly come together, prepare our computing environment, research tools, and learn how to use them before we could start analyzing the data. It would have been nice to know in advance what a typical computer setup would be so we could have our labs set up properly. We found the focus on a CEO-level presentation to be a nice challenge: we had a lot of documentation of what occurred, and it forced us to convey what is important in a non-technical way. Given the timeline to complete Phase I, we felt we were able to understand enough of what happened and present the information in a professional manner. Overall, Phase I was a challenging but rewarding experience for our team.
Simeon Kakpovi says
This was a very fun and rewarding challenge for us. We had a little trouble initially setting up a good working environment and finding the right tools to analyze the data. The amount of data given was intimidating because our team did not have as much experience in this arena as some other teams.
However once we got our ideas down on a whiteboard and started analyzing the data, the process became a lot more fluid. It was very exciting to connect what was happening in the different sets of data to create one logical story.
Jason Johnson says
It’s definitely been an interesting experience. The amount of data was large, but the scenario provided just enough direction to get everything unraveled pretty well. One thing that I think might have been a problem is a feeling of contradictory goals – the C level presentation makes it hard for us to discuss everything we found in enough detail to prove to you that we really found it, and some of the Rules of Engagement were a bit hampering (for example, being told not to examine private or confidential files should rule out digging into company emails, even though this is the sort of thing that would help with the analysis and threat assessment). Of course, these are probably also the sorts of contradictions that have to be dealt with in the real world, so that’s maybe of value as an educational experience.
Similarly, the presentation length also was tricky, since we found a lot of things that would merit further investigation and discussion with the technical staff of the company, but which had to be cut for space to deal with the things we absolutely knew were part of the attack. That’s good from the educational perspective, but less so from the competition perspective. Although part of the challenge is likely in trying to blend those two, it creates a deeply unpleasant sense of, “Should we include this? Should we not?” that a real world situation would lack because there’s an opportunity for dialogue there. I think we went over that time estimate by a pretty hefty margin, partially because of the finality that goes with a contest submission.
Eli Sohl says
It feels a bit weird discussing an ongoing competition in a forum where all the other teams can and will read what we say. Apologies if I’m a bit light on details here.
This first phase seemed like it went well for our team. There was just enough time to bring everyone up to speed on the relevant tools during the competition, so that even novice team members could make valuable contributions. We were impressed by the realism and scope of the dataset (Markov babble notwithstanding), which was a lot of fun to work through. Figuring out timezone data so that important events could be put into a single localized timeline was an interesting challenge, too.
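As a rough sketch of that normalization step (the timestamps and offsets below are invented, not from the challenge data), converting each source’s local timestamps to UTC before sorting looks something like:

```python
from datetime import datetime, timezone, timedelta

def to_utc(ts: str, utc_offset_hours: int) -> datetime:
    """Parse a local log timestamp and convert it to UTC."""
    local = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    tz = timezone(timedelta(hours=utc_offset_hours))
    return local.replace(tzinfo=tz).astimezone(timezone.utc)

# Merge events recorded in different local zones into one UTC timeline
events = [
    ("2015-09-01 09:15:00", -4),  # a US Eastern (EDT) host
    ("2015-09-01 14:10:00", 1),   # a CET host
]
timeline = sorted(to_utc(ts, off) for ts, off in events)
```

Once everything is in UTC, event ordering across packet captures, logs, and memory artifacts becomes a simple sort.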
One cool consequence of the timeframe and the dataset’s scale and interconnectedness was that, even though we organized into subgroups, everyone was able to operate as something of a generalist. If someone found an interesting new piece of data, often other team members would be able to duplicate the finding, connect it to whatever part of the dataset they were looking at, and in this way put together a more cohesive picture. Good communication here was absolutely key.
Probably the biggest challenge for us was taking what we had discovered and synthesizing all these details into a high-level narrative appropriate for the final deliverable. One thing that we kept saying was that we wished we could just sit down with a few organizers and just say, hey, look at all this crazy stuff we’ve found! A whole lot of interesting data, like interesting indicators of compromise gleaned from the memory dumps using Volatility, had to be cut or only obliquely alluded to in order to avoid oversaturating the presentation. I was surprised, and I think the rest of the team was as well, by just how much data we had that was important to our analysis process but, post-analysis, ended up not fitting in the final deliverable.
At the same time, though, we did appreciate how the final presentation kept up the realism of the challenge, since in a real incident response the final step would be to put together such a high-level presentation for management. This phase definitely taught us a lot of lessons that we’re planning to take advantage of going forward.
Julian Rrushi says
Our students appreciate the realism of the first phase of the challenge. The forensic data were quite similar to what is encountered in real-world cyber analysis. The kinds of vulnerabilities exploited by the intrusions are real and current as well.
Aaron Dean says
I felt that the challenge provided a great degree of hands-on experience that could not have been readily obtained without the structure of a challenge such as this. I’d like to thank Lockheed Martin and Temple University for organizing the event.
I felt that the dataset provided was large, possibly too large. Nonetheless, it was a great feeling to be able to pull out the relevant information and piece together a story of the attacks and how they happened.
None of the members of our team had much computer/network forensics experience when the challenge began, but we have all developed many marketable skills from this exercise.
The presentation to executive management was a great idea, as the ability to relate sometimes complex technical topics to management is an important skill.