HID proximity cards encode a facility code and internal card number in hex on most cards. Decoding it is extremely easy and should take less than a minute.

Equipment needed:
- Omnikey reader (I like the 5025CL)
- RFIDIOt
- BRIVO Card Calculator

Steps:
1. Run isotype.py from the RFIDIOt toolkit and copy the ID.
2. Paste the ID into the BRIVO decoder.

It is really that simple. I made a quick video demo (that is tinted purple for some reason): https://www.youtube.com/watch?v=0wmRDdAsur0

I have some writeable HID proximity cards on the way and will have a blog post up soon on how to completely clone one.
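For the curious, the decoding the BRIVO calculator does can be sketched for the common 26-bit H10301 format (a minimal sketch under that assumption; your card may use a different format, and the parity bits are ignored here):

```python
# Minimal sketch: decode the common 26-bit HID H10301 prox format.
# Bit layout (MSB first): 1 even-parity bit, 8-bit facility code,
# 16-bit card number, 1 odd-parity bit. Parity is not verified here.

def decode_h10301(raw_hex):
    raw = int(raw_hex, 16)
    card_number = (raw >> 1) & 0xFFFF   # skip the trailing parity bit
    facility = (raw >> 17) & 0xFF       # 8 bits above the card number
    return facility, card_number

# Made-up raw value encoding facility 42, card 1337:
print(decode_h10301("540A72"))  # -> (42, 1337)
```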
I have recently started investigating RFID security and picked up a Chameleon Mini. It is an amazing project with a ton of potential. In these quick demo videos I will show how to clone the UID of both a Mifare 1K 4-byte card and a Mifare 1K 7-byte card using the Chameleon.

Cloning the Mifare 1K UID (Aria card): [embed]https://www.youtube.com/watch?v=zqVXoF7_EqE[/embed]

Cloning the Mifare 1K 7B UID (Oyster card): https://www.youtube.com/watch?v=iv5MKq9RV8I

These were both extremely simple to do. In the future I will demo how to take a full dump from an RFID card and load it onto the Chameleon Mini for a "true clone".

Tool list:
- ACR122U
- Chameleon Mini
- ZTerm
- libnfc
- Cardpeek
- Oyster card
- Aria card

Hardware picture:

Disclaimer: While cloning the UID isn't a full spoof of the card, way (read: most) more organizations rely on UID-based authentication than should. And while the tools say the UIDs have been cloned, I have not tested these on any live systems and would not without permission.
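As an aside on what a UID clone has to keep consistent: on a 4-byte-UID Mifare Classic card, block 0 starts with the UID followed by a BCC check byte that is simply the XOR of the four UID bytes. A minimal sketch (the UID below is made up):

```python
# Sketch: compute the BCC check byte for a 4-byte Mifare Classic UID.
# Block 0 begins UID[0..3] then BCC = UID[0] ^ UID[1] ^ UID[2] ^ UID[3];
# an emulated card needs a BCC consistent with the UID it presents.

def bcc(uid: bytes) -> int:
    out = 0
    for b in uid:
        out ^= b
    return out

uid = bytes.fromhex("DEADBEEF")          # made-up example UID
block0_start = uid + bytes([bcc(uid)])
print(block0_start.hex().upper())        # -> DEADBEEF22
```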
How the conversation goes when someone tells me they want to get into “hacking”: [embed]https://www.youtube.com/embed/tJ65q9RtqaY[/embed]
Yesterday Randy Westergren wrote this blog post: United Airlines Bug Bounty: An experience in reporting a serious vulnerability. I do not know Randy and do not think he did anything wrong, but his post is a perfect example of why companies I talk to are afraid of implementing bug bounty programs. He hit the trifecta of why companies fear bug bounty programs in one post:
- Their development cycle wasn't fast enough for the researcher. Is six months a "more than reasonable time frame"? On the surface, sure, but unless you attend their planning games and know their regulatory commitments, roadmap, and backlog, you cannot say that for certain. Most companies have enough internal and contractual pressure on their development cycles without a researcher who is "helping" adding another source.
- The researcher involved the press: Companies do not want to be in the press for having poor security. So sure, when he contacted the press they fixed the issue, but it didn't win him or security researchers any friends at United. Companies do not want to manage a bug bounty program as a fire-fighting exercise. They want to intake bugs into their regular development cycle and work them through their normal process.
- The researcher went "rogue": He wasn't going to be compensated for his work since it was a duplicate, so the only compensation he could still get was to go public. Companies can't pay for every duplicate bug found, and it only takes one researcher going rogue to sour a bug bounty program for a company.
I was invited to attend the 2015 DigiCert Security Summit this week in Las Vegas. For a one-day conference it had some really amazing talks by some of the smartest people in the industry. Gary McGraw gave an amazing talk on the secure software development life cycle and the Building Security In Maturity Model (BSIMM). Emily Stark talked about the future of HTTPS everywhere and demoed the new security tab in Chrome's developer tools: Dan Kaminsky did Dan Kaminsky stuff. Runa Sandvik gave a humorous, thought-provoking, and informative talk on protecting press sources on the internet. DigiCert also gave me this iOS-controlled drone, which seems to be amazingly hackable: