Symantec and Intel are jointly developing security products that could be built directly into computer microprocessors, Symantec Vice President Rowan Trollope said on Tuesday.
The program, dubbed Project Hood, is part of an effort by both companies to expand their use of virtualization technology, or using software to replicate entire computer systems.
They are developing software security "appliances" that would work with virtualization technology that Intel is already incorporating into its computer chips, Trollope said.
Appliances are specialized computers that handle tasks such as storing data, streaming music or securing a network.
Instead of designing the security software to run on Microsoft's Windows or another operating system, Symantec and Intel are building it so it can directly interact with the Intel chips.
"It runs underneath and alongside the operating system," Trollope said.
The companies are developing the products for use on servers and business desktop computers, though they may eventually expand the effort to consumer PCs, he said.
Edit By: Adrian Flucuş
8/15/2007
[Network Security]U.N. Internet sites hit by hackers
Hackers breached the United Nations Web site during the weekend, prompting the world body on Monday to stop posting new information while technicians evaluated the system, U.N. officials said.
Early on Sunday, the hackers defaced the official Web site on pages reserved for Secretary-General Ban Ki-moon with slogans accusing the United States and Israel of killing children.
The United Nations quickly removed the hackers' messages and on Monday stopped updating the site while the system was assessed, U.N. spokeswoman Michele Montas said.
In addition to the main U.N. site, the web pages for the Economic and Social Council and the Paris Web site of the U.N. Environment Program were also attacked, Montas said.
She said U.N. investigations were underway and "quick action was taken to prevent damage to the computer system." Key financial information was not affected, she said.
A repeating message on the secretary-general's page read: "Hacked By kerem125 M0sted and Gsy That is CyberProtest Hey Ysrail and Usa, Dont kill children and other people Peace for ever No war" according to snapshots of the site by bloggers.
One of the three hackers claimed to be Turkish.
CNET, a computer and technology publisher, said on its Web site: "The perpetrators appeared to have used a well-known and highly preventable technique called SQL injection, which takes advantage of flawed database programming to activate malicious lines of code."
The defacements, which affected the secretary-general's site and news pages (http://www.un.org/news), were cleaned within hours, Montas said.
In an e-mail to CNET's news.com Web site, Giorgio Maone, an Italian software developer who has worked with the world body, said, “The U.N. staff just deployed a cosmetic patch, which hides it from the most obvious tests, but it cannot prevent an attack” and said he had offered his assistance.
At the Web site www.M0sted.org, there is a list of sites allegedly hacked by the group, including Harvard and other universities and Norfolk and Norwich University Hospital in Britain, CNET said.
Edit By: Adrian Flucuş
[Network Security]Study finds kids justify illegal downloads
Children in Europe are aware of the risks of illegal downloading, but often rationalize their act by saying that everyone — including their parents — is doing it, according to a major European Commission survey.
Other excuses included: the download is for personal and private purposes; the Web sites presumably remunerate the artists; claims of harm inflicted on artists lack credibility; and DVDs and CDs are simply too expensive.
Almost all of the children surveyed in the 27 European Union member countries as well as in Norway and Iceland said they expect to continue downloading. They also said the risk of downloading a virus was far more dissuasive than the risk of legal proceedings.
The survey results, released Friday, found that most kids use the Internet several times a day and, while Internet use is to some extent limited by parents, most own their own mobile phones, the use of which is largely unsupervised.
The survey also found that children are much more attuned to such potential online risks as security, viruses, identity theft and potential dangerous contact with strangers than parents imagine, and tend to know about the necessary precautions.
Edit By: Adrian Flucuş
[Network Security]UK lawmakers urge Internet firms to tackle cyber "Wild West"
Internet companies, retailers and the government must do far more to protect people from the dangers of the "lawless Wild West" of cyberspace, an influential committee of Britain's Parliament said on Friday.
In a critical report, the lawmakers said the government and industry have a "laissez-faire" approach to online crime that could lead to an “economically disastrous” loss of public confidence in the Internet.
With computer fraud growing more sophisticated, people have little hope of protecting themselves alone, the House of Lords' Science and Technology Committee said.
"You can't just rely on individuals to take responsibility for their own security," said committee Chairman Lord Broers. "They will always be outfoxed by the bad guys."
Cybercrime is one of the fastest-growing criminal activities, in Britain and elsewhere, and covers a huge range of illegal activity: financial scams, computer hacking, downloading pornographic images, virus attacks, stalking by e-mail and creating websites that promote racial hatred.
The lawmakers said that industry - from software makers and Internet service providers to banks and shops - must do far more to protect customers.
And they criticised the government for insisting that responsibility for security rests with Internet users, who are often faced with a “bewildering” set of options.
"This is no longer realistic, and compounds the perception that the Internet is a lawless 'Wild West'," the report said.
The government must work with the European Union to see if more responsibility for security could be legally handed to computer and software makers, the report said.
A network of police computer laboratories should be set up to fight the "flourishing" online crime industry.
Senior police must get the extra funds needed to launch a central e-crime unit and a Web site where people could report online offences.
The report also highlighted the lack of clear figures on e-crime, and said the government should make sure the courts are aware of the seriousness of the problem.
"The choice is either to intervene now…to keep the threat to the Internet under control, or to let it grow unchecked, and risk an economically disastrous, long-term loss of public confidence in the Internet," the report concluded.
Edit By: Adrian Flucuş
8/12/2007
[Network OS]Novell wins rights to Unix copyrights
SCO also owes Novell for licensing revenue paid by Sun and Microsoft
August 11, 2007 - Novell Inc. won a significant ruling in its lengthy battle with The SCO Group Inc. on Friday.
A judge in the U.S. District Court for the District of Utah, Central Division, found that Novell is the owner of the Unix and UnixWare copyrights, dismissing SCO's charges of slander and breach of contract.
The judge also ruled that SCO owes Novell for SCO's licensing revenue from Sun Microsystems Inc. and Microsoft Corp. SCO is obligated to pass through to Novell a portion of those licenses, the judge said.
In the ruling, the judge said SCO must pay Novell, but the amount will be determined in a trial, said Pamela Jones, founder and editor of Groklaw, a Web site that follows open-source software legal issues.
In another major blow to SCO, the judge said that because Novell is the owner of the Unix copyrights, it can direct SCO to waive its suits against IBM Corp. and Sequent. "SCO can't sue IBM for copyright infringement on copyrights it doesn't own," Jones said.
The ruling is good news for organizations that use open-source software products, said Jim Zemlin, executive director of the Linux Foundation. "From the perspective of someone who is adopting open-source solutions to run in the enterprise, it proves to them that the industry is going to defend the platform, and that when organizations attack it from a legal perspective, that the industry collectively will defend it," he said.
The decision is "abysmal" news for SCO, according to Zemlin. "Their future is looking bleak," he said.
SCO, which may still appeal Friday's decision, did not reply to requests for comment.
In a statement, Novell said the ruling cut out the core of SCO's case and in the process eliminated SCO's threat to the Linux community.
Still outstanding are several counterclaims. For example, Novell's slander of title counterclaim against SCO is still ongoing and will go to trial, Jones said.
The case is so complex that the judge asked the parties to file a document listing what they believe remains outstanding in the IBM case, Jones said. Those documents must be filed by Aug. 31.
The battle began in 2003 when SCO filed a suit against IBM claiming that it had violated SCO's rights by contributing Unix code to Linux. The following year, SCO sued Novell, saying that Novell falsely claimed it owned rights to Unix.
Edit By: Nancy Gohring
[Network Security]'Hackers' deface UN site
Some sections still offline hours after Turkish trio uses SQL injection attack
August 12, 2007 - "Hackers" defaced the United Nations' Web site early Sunday with messages accusing the U.S. and Israel of killing children. As of late afternoon, some sections, including the area devoted to Secretary General Ban Ki-Moon, remained offline.
The attack, spelled out by an Italian software developer on his blog and later reported by the BBC, replaced blurbs of recent speeches by Ban with text attributed to a trio of would-be hackers.
HACKED BY KEREM125 M0STED AND GSY
THAT IS CYBERPROTEST HEY ÝSRAIL AND USA
DONT KILL CHILDREN AND OTHER PEOPLE
PEACE FOR EVER
NO WAR
The section of the U.N.'s site dedicated to Ban was still offline as of 5 p.m. EDT Sunday. It sported a message reading: "This site will be temporarily unavailable due to scheduled maintenance."
Giorgio Maone, a software developer from Palermo, Italy, noted the incident timeline and posted several screenshots of the defacement on his blog. Maone pegged the attack as an SQL injection exploit, which let the attackers add their own HTML code to the site. SQL injection attacks are a common tactic by defacers, and have been used against numerous government and commercial sites worldwide. In June, Microsoft Corp.'s U.K. Web site was defaced by an SQL injection.
"There's a technical reason for the missing apostrophe [in DON'T], though, because messing with this very character (') is part of the technique apparently used by the attackers," said Maone in his blog post Sunday. "The [U.N.'s] site is vulnerable to an [SQL injection] attack...this is a very well known kind of vulnerability, fairly easy to avoid and very surprising to find in such a high profile site.
"Moreover, the hole seems not to be patched yet, thus the site could be defaced again at will," Maone added.
"Kerem125," "m0sted" and "gsy" are names that have been used in the past by would-be hackers claiming to be from Turkey, the BBC said. An Australian insurance company, for example, had its site defaced in late July by a group that included kerem125.
The U.N. could not be reached Sunday for comment.
While site defacing is common, large-scale attacks have been rare. Last year, however, nearly 1,000 Danish sites were defaced by Islamic attackers who protested controversial cartoons that featured the Prophet Mohammed. And in 2001, a month-long defacement dustup raged between Chinese and American entities after a U.S. spy plane was forced down by Chinese fighters.
Edit By: Gregg Keizer
8/09/2007
[OS Related]Dell expands Linux PC sales to Europe, China
Dell said on Tuesday it has decided to expand sales of consumer personal computers loaded with the Linux operating system to the United Kingdom, France, Germany and China.
The world's No. 2 PC maker started selling Ubuntu Linux PCs to consumers in May in a program that was limited to the United States. Ubuntu is a free version of the Linux operating system.
Linux software, the main rival to Microsoft's market-dominating Windows, has been one of the fastest-growing types of software on business computers over the past decade.
But it has yet to gain a foothold in the consumer market, where Windows sits on more than 90 percent of personal computers.
Dell says that so far the bulk of its U.S. Linux sales have been to open-source enthusiasts. They tend to like the software because it is free, thousands of compatible programs are also free, and it is easy to customize.
But Dell says that a small number of Linux buyers are first-timers interested in trying out an alternative to Windows. If that group grows it could hurt Microsoft's profit growth.
Every PC that is sold with Linux installed on it instead of Windows means one less license fee payment from a PC maker.
Microsoft and PC makers don't disclose the size of those license fees. Retail versions of Windows Vista generally sell for $200 to $400.
On Monday No. 3 PC maker Lenovo Group said it would introduce a broad line of Linux laptops, the strongest endorsement to date of the open-source software by a major PC maker.
Dell said that the Linux machines it sells in Europe, which went on sale Tuesday, come with Ubuntu Linux.
Customers in China will be sold PCs factory-installed with Novell Inc.'s SUSE Linux. The company did not say when those will go on sale.
Dell made the announcement in San Francisco at the annual LinuxWorld convention.
Edit By: Adrian Flucuş
[PC Related]Apple unveils redesigned iMac desktops
Apple unveiled a line of slimmer desktop computers on Tuesday in a long-expected update of its iMac brand, positioning it for the back-to-school and holiday shopping seasons.
The new iMacs, which will sport thinner aluminum casings, have displays measuring 20 inches and 24 inches and will cost $1,199 to $1,799, depending on their configurations, said Apple Chief Executive Steve Jobs at a media event at Apple’s headquarters in Cupertino, California.
The cost of the 24-inch iMac has been dropped by $200, and Apple has eliminated the 17-inch iMac computer, Jobs said.
The last update to the iMac line was in September 2006, when Apple introduced a model with a 24-inch screen - its largest - and said the entire model line would be powered by Intel chips instead of ones from International Business Machines Corp.
"Apple has grown two to three times the market for the past several quarters," said analyst Shannon Cross of Cross Research. "This product launch should position them well for the back-to-school and holiday seasons."
Apple recently launched the iPhone mobile device in a bid to build a third major product line alongside its Macintosh computers and iPod media players, but desktop and laptop sales still account for the bulk of its revenue.
In its third quarter, Apple sold 634,000 desktops for revenue of $956 million, accounting for about 18 percent of total revenue.
"The iMac has been really successful for us and we'd like to make it even better," Jobs said. "We've managed to make it even thinner than before."
Apple laptop sales totaled $1.58 billion in its most recently reported quarter. The MacBook laptop line was not affected by Tuesday’s announcement.
Sales of Macintosh computers have grown faster than the overall PC market, but Apple's share of the market by unit sales is estimated to be less than 5 percent.
Apple has also used the iPod and, now, the iPhone as "halo" products to draw customers into stores and get them interested in its computers.
Jobs also said that the company was adding a software "button" to the iPhone that allows users to upload photos taken with the built-in camera on the iPhone to Apple's .Mac online data and Web-hosting service.
Apple shares rose $1.30 to $136.55 in afternoon trading on Nasdaq. The stock has risen 59 percent so far this year, largely on anticipation of strong demand for the iPhone and that enthusiasm for the device will translate into stronger sales of other Apple products.
Edit By: Adrian Flucuş
8/08/2007
[Network Related]China hopes to cure Internet addicts at summer camp
China is launching an experimental summer camp for 40 youngsters to try to wean them off their Internet addiction, state media said on Tuesday.
The 10-day program would accept youngsters aged between 14 and 22 once they had undergone a psychological test and evaluation, the China Daily said.
About 2.6 million — or 13 percent — of China’s 20 million Internet users under 18 are classed as addicts, state media have reported.
The youngsters at the summer camp would be treated for depression, fear, unwillingness to interact with others, panic and agitation.
It would appear to offer a softer option than the Internet Addiction Treatment Centre near Beijing, which uses a blend of therapy and military drills to treat children addicted to online games, Internet pornography and cybersex.
Concerned by a number of high-profile Internet-related deaths and juvenile crime, the government is now taking steps to stem Internet addictions by banning new Internet cafes and mulling restrictions on violent computer games.
According to government figures, there are currently 113,000 Internet cafes and bars in China.
The newspaper cited the case of one student accepted to East China University of Science and Technology with high marks.
“He could not adjust to Shanghai campus life without burying himself in computer games,” the China Daily said. “He would play day and night, skipping classes and avoiding friends, until he was pulled out of the Internet cafe by a supervisor.”
In a joint effort with the camp, Shanghai’s education commission has organized a volunteer group to patrol the city streets and stop minors entering Internet cafes.
Edit By: Adrian Flucuş
[Network]Novell Certifies AMD Validated Server Platforms
AMD announced the availability of AMD Validated Server platforms certified for SUSE Linux Enterprise Server from Novell.
AMD will deliver solution providers a validated server platform with Novell “YES Certified” designation that is compatible with SUSE Linux Enterprise Server and will be fully supported by both AMD and Novell.
Together, AMD and Novell are providing solution providers with more choice through an easier, faster and less expensive process by which to obtain certified and pre-configured Linux-based server platforms. Having completed the Novell YES Certified testing on AMD Validated Server barebones platforms, Supermicro Computer, Tyan Computer Corporation and Uniwide Technologies are the first original design manufacturers to offer these platforms. Solution providers will now have the ability to leverage this testing and offer configured Novell YES Certified SKUs, expediting the certification process dramatically.
AMD Validated Server platforms for SUSE Linux Enterprise Server represent another milestone in the relationship between AMD and Novell. AMD’s alignment with the Novell YES Certified program will help broaden the AMD Validated Server program’s reach into the Linux community, enabling solution providers to offer their customers the choice of high-quality, reliable, and stable open-standard server solutions based on state-of-the-art technology.
“The combination of two powerful channel programs like AMD’s Validated Server program and the Novell YES Certified program offers solution providers the ability to sell customized server platforms with the highest degree of confidence,” said Justin Steinman, director of marketing for Linux and Open Platform Solutions at Novell.
The AMD Validated Server program enables tested and validated solutions based on AMD Opteron processors to help solution providers build high-quality, reliable commercial solutions. By leveraging the program, solution providers have the opportunity to lower development costs, streamline development cycles and accelerate overall time to market.
Edit By: Horia Covaci
8/07/2007
[Network Security]Black Hat: Mozilla says it can patch flaws in 10 days
Sounds like a dare; observers attest to sobriety of exec making security vow
August 06, 2007 - A Mozilla Corp. executive has vowed that his company can patch any critical vulnerability in its software within 10 days, a sign that Mozilla may intend to step up its efforts to improve security.
Mozilla executive Mike Shaver backed up his claim by scrawling it on a business card at the Black Hat security conference in Las Vegas last week and handing it to Robert Hansen, CEO of SecTheory.com, who also runs the ha.ckers.org Web site. Hansen posted a photo (warning: linked URL and image contain expletive) of Shaver's business card, including the claim "Ten [expletive] days."
"I told him I would post his card -- and he didn’t flinch. No, he wasn’t drunk. He’s serious," Hansen wrote in his blog.
Web browser security has become increasingly important with the rise in use of Web-based applications, from Google Inc.'s Gmail to social networking sites such as Facebook.com and enterprise software-as-a-service programs such as Salesforce.com. A security vulnerability within a Web browser can put a user's data at risk and make a PC vulnerable to hackers.
Shaver's 10-day pledge applies to "critical" vulnerabilities, although there is no standard for such a rating, and different companies evaluate levels of risk in different ways. Another condition is that the vulnerability is disclosed responsibly, meaning Mozilla is notified of the issue before it is publicized.
The pledge sparked some debate about whether Mozilla will be able to keep to it.
"I've always been a fan of Mozilla and Firefox, however, this is a pretty bold claim for a company of any shape or size," Hansen wrote.
Other commentators said keeping the 10-day promise might not be easy. Patches need to be of high quality and tested properly, which could take more time depending on how severe the vulnerability is, said Graham Cluley, senior technology consultant for Sophos PLC.
"If that's what they're saying, then it is an audacious claim," Cluley said. "Some critical security vulnerabilities can reside deep in the bones of a complicated software product like Firefox and may require extensive testing to ensure that the highest-quality fix is being made available to the users."
Others had more confidence in Shaver's claim.
"Rome wasn’t built in one day, but heck, Firefox isn’t Rome," said a commentator on Hansen's blog. "And Mozilla has 10 whole days. I don’t know, put 20 geeks in front of a computer for 10 days and just watch them go."
Mozilla security chief Window Snyder said via email late Sunday night from the U.S. that Mozilla would comment further on the matter later Monday.
Mozilla updated Firefox twice in July. The last update, which came out July 30, fixed two problems that Mozilla labeled "critical," although it arrived about two weeks after security researchers first posted exploit code for the flaws.
Microsoft Corp. patches its operating system and applications on the second Tuesday of each month. The company sticks to the schedule, but has released off-schedule patches for particularly dangerous vulnerabilities.
Faster patching could help Mozilla gain a broader share of the browser market over Microsoft's Internet Explorer if administrators and users feel it's a safer option for cruising the Web. Firefox had a 27.8% share of the European market but just 18.7% in North America, according to recent statistics from XiTiMonitor, a French company that tracks browser market share.
Edit By: Jeremy Kirk
[Network Comment]Baidu may be worst copyright violator, says Wikipedia
Chinese search engine scarfing up entries without honoring GNU license
August 06, 2007 - Baidu.com Inc., which operates China's most popular Internet search engine, may be the worst violator of Wikipedia copyrights, the chair of the foundation behind popular online encyclopedia Wikipedia said Sunday, as she asked the company again to give credit where credit is due.
The Wikimedia Foundation, Inc. has no plans to take Baidu to court -- the group has never sued a copyright violator -- but it is asking more publicly for the Chinese search company to respect its copyright license by simply attributing Wikipedia entries that have been copied on Baidu Baike, the company's Chinese-language Web encyclopedia.
"They do not respect the license at all," said Florence Nibart-Devouard, chairman of the board of trustees of the Wikimedia Foundation, during an interview at the Wikimania 2007 conference in Taipei. "That might be the biggest copyright violation we have. We have others," she added.
Baidu Baike's rivals in China have also requested the company comply on the copyright issue. Hoodong.com, which develops Chinese-language wiki collaboration software and operates its own online encyclopedia in China, has been monitoring Baidu Baike for some time, said founder Pan Haidong, and has started capturing screen shots of violations as proof.
Baidu could not immediately be reached for comment.
Wikipedia editors have asked several times for Baidu to cite it when using Wikipedia content, but have received no replies nor seen any improvement from the Chinese company. Usually, an e-mail explaining Wikipedia's copyright license is enough to prompt most companies to respect it, but not in this case.
That's a problem for Wikipedia because not only is Baidu Baike the largest online Chinese-language encyclopedia, it also contains more articles than any Wikipedia except the English-language Wikipedia. Baidu Baike boasted 809,237 entries as of Sunday, edging out the German edition of Wikipedia, which has 619,612 entries, for second place. The Chinese version of Wikipedia hosts 139,131 articles.
Wikipedia also finds it difficult to compete against Baidu Baike due to strict censorship laws in China. The Chinese- and English-language versions of Wikipedia face being blocked in China without notification or explanation. Currently, the English site is available to users in China, although the Chinese-language site remains inaccessible.
"Since we are blocked in China, Wikipedia exists only on one other Web site there, and it is not ours," said Nibart-Devouard.
Baidu Baike faces the same restrictions on content, but operates in such a way as to avoid problems with Chinese authorities. Anyone wishing to publish entries on Baidu Baike must register first, giving the site people's real names, and site administrators review all entries before posting, a way to ensure compliance with Chinese censorship laws.
All entries published on Wikipedia fall under the GNU Free Documentation License, which allows other organizations a wide range of uses of the material, including for-profit publishing, with a few stipulations. Anyone using the material, particularly word-for-word copies and mirror Web sites, needs to say the material came from Wikipedia.
By contrast, Baidu said all content generated on Baike becomes the property of Baidu. But the encyclopedia, which like Wikipedia relies on users for entries, also expressly warns users not to cut and paste other people's work, insists that all copyright laws be respected, and asks that sources used in all entries be properly cited. It expressly tells users that any contributions which quote works held under the GNU Free Documentation License, which Wikipedia uses, must follow the restrictions on that license.
Going forward, the Wikimedia Foundation plans to continue to try to communicate its issues with Baidu. A Chinese-language chapter of the foundation was created in August, which could more closely monitor Baidu Baike and point out and document any violations of Wikipedia's copyright policies. But beyond that, the foundation has no plans to use legal means to resolve the issue, mainly because the legal issues are technically difficult.
"The foundation does not hold a copyright on the articles, the editors or the authors do, so there is very little we can do," said Nibart-Devouard, although she said that if pushed, the foundation could try some kind of class-action suit. In the meantime, the foundation will continue politely asking for its content on Baidu Baike to be cited as having come from Wikipedia, and will seek other peaceful methods to resolve the issue.
Edit By: Dan Nystedt
[Security News]$10 hack can unlock nearly any office door
More hardware fun from Defcon; biometric devices affected too
August 06, 2007 - Cut a couple of wires, insert a small, easy-to-make device between them, and you can walk right through all those supposedly card-protected locked office doors.
At the Defcon security conference over the weekend, a hacker and Defcon staffer who goes by the name Zac Franken showed off how a small homemade device he calls Gecko can perform a classic man-in-the-middle attack on the type of access card readers used on office doors around the country. Gecko is simply a small, programmable PIC chip with a wire connector on either side. Once it's connected to the wires behind the card reader, it's not only trivial to use a 'Replay' card to get through the door, but you can also disable the system so that nobody else can come in behind you.
What's more, making a Gecko is easy and cheap. Franken says the hardware costs about $10.
According to Franken, the hack subverts the Wiegand protocol, commonly used for communication between the card reader and the back-end access control system, and doesn't take direct advantage of any problems with any of the hardware involved. When you swipe your card at the office, the reader very likely sends a signal using the Wiegand protocol to the control system, which then opens the doors.
"The problem is, this is what we call a plain-text protocol," Franken says. "There's nothing secure about it."
For many card readers, getting Gecko in place is just a matter of popping off the reader's cover with a knife or screwdriver and undoing two screws, he says. That provides access to the wires that carry the signal from the reader to the control system.
In a real-world situation you'd quickly cut the wires and insert one cut end into one side of the Gecko, and the other cut end into the Gecko's other side. In Franken's demonstration he used pre-made connectors so he could easily disconnect and reconnect the device. When you put the reader's cover back, the Gecko would be hidden behind it.
The card reader also continues to work fine with the Gecko attached. It passes along the signal from the reader to the control system as it's supposed to. But when someone swipes an authorized card that unlocks the door, Gecko saves that signal.
With that saved unlock signal, the attacker can swipe a 'replay' card that tells Gecko to re-send that saved signal, and the doors unlock. What's more, any saved access logs would only show that the same person who originally swiped the saved signal swiped his card again.
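The flow Franken described is easy to model. The sketch below is a runnable Python simulation of the replay logic only, not his firmware: the real Gecko is a PIC program spliced into the reader's two Wiegand data lines, and the bit patterns, the trigger-card value and the Controller stand-in here are all hypothetical.

    REPLAY_CARD = [1, 0, 1, 0, 1, 0, 1, 0]   # attacker's trigger credential (arbitrary example)

    class Controller:
        """Stand-in for the back-end access control system."""
        def __init__(self, authorized):
            self.authorized = authorized      # known-good bit patterns
            self.log = []

        def receive(self, bits):
            # Wiegand is effectively plain text: the controller trusts
            # whatever pulses arrive on the wire, with no challenge or
            # encryption.
            granted = bits in self.authorized
            self.log.append((tuple(bits), "OPEN" if granted else "DENY"))
            return granted

    class Gecko:
        """Man-in-the-middle spliced between reader and controller."""
        def __init__(self, controller):
            self.controller = controller
            self.saved = None                 # last legitimate credential seen

        def on_swipe(self, bits):
            if bits == REPLAY_CARD:
                # Trigger card: don't forward it. Replay the stored
                # credential instead, so the door opens and the audit
                # trail names only the original cardholder.
                return self.controller.receive(self.saved) if self.saved else False
            self.saved = list(bits)                  # record the credential...
            return self.controller.receive(bits)     # ...and pass it through

    employee_card = [0, 1, 1, 0, 1, 0, 0, 1]
    ctl = Controller(authorized=[employee_card])
    gecko = Gecko(ctl)

    gecko.on_swipe(employee_card)        # victim badges in; Gecko saves the bits
    print(gecko.on_swipe(REPLAY_CARD))   # True: attacker's trigger card opens the door
    print(ctl.log)                       # both entries show the employee's credential

Running the simulation makes the logging point concrete: the victim's swipe and the attacker's replay both appear in the controller's log as the same credential, so the audit trail implicates only the legitimate cardholder.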
Edit By: Robert McMillan
[Network Security]Black Hat 2007 sees Web 2.0 repeating Web 1.0 mistakes
LAS VEGAS--This year's Black Hat was pretty much summed up in a prescient keynote by Richard Clarke, the nation's former cyber security czar, now a novelist and chairman of Good Harbor Consulting. Clarke said, "we're building more and more of our economy on cyberspace 1.0, yet we have secured very little of cyberspace 1.0." The apparent speed gained with Ajax (Asynchronous JavaScript and XML), a technology that divides processing tasks between the Web server (Web site) and the Web client (browser), has opened Web 2.0 to some old-school attacks.
Nothing more clearly demonstrated this than a live hijack of a Gmail account. In a talk originally slated to be presented alongside his colleague David Maynor, Errata Security CEO Robert Graham demonstrated for a standing-room-only crowd how he was able to use a pair of tools called Hamster and Ferret to sniff the wireless airwaves for the URLs of Web 2.0 sites. While talking about another matter entirely, Graham ran the tools in the background, sniffing the wireless packets in the conference room and looking for Web 2.0 session cookies used by those in the audience for his talk (if, as a speaker, you ever wanted to thwart those who would be checking e-mail during your presentation, this is the tool to use). Grabbing cookies is not new. What is new is that Graham was able to grab these Web 2.0 clear-text session cookies out of thin air and then plunk the captured URL into a new browser. No password is needed; the cookie itself is enough. Toward the end, Graham opened his Hamster tool and found several likely candidates. He chose one Gmail account that had been opened during his talk. The presentation screen lit up with some poor guy's active Gmail account briefly displayed. Everyone applauded before Graham quickly wiped the information from the screen.
Should you avoid Gmail? No. If you simply change the URL in your Gmail bookmark (or any other Google-related bookmark) from http:// to https://, the Errata Security hack no longer works. That's not true, however, for Facebook, Hotmail, and several other Web 2.0 accounts. Graham says that while traditional Web 1.0 sites long ago learned to terminate session cookies, the cookies used on Web 2.0 sites don't expire for several years, so you could sniff accounts out of the air at your local Starbucks and still have access to a person's account months later. That's what's really scary about this new kind of man-in-the-middle attack: the victim has no idea that it's happening, and even changing the account password has no effect. An attacker can send messages, read existing messages, and even alter the look and feel of the Web mail service itself; the one thing he can't do is lock the owner out of the account.
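For the curious, a minimal sketch of why a sniffed clear-text cookie is enough. This is not Graham's Hamster or Ferret code, and the hostname, path, and cookie value are all hypothetical.

    # Replay a sniffed session cookie over plain HTTP (illustrative only).
    import http.client

    stolen_cookie = "SID=sniffed-session-token"  # captured off the air

    conn = http.client.HTTPConnection("webmail.example.com")  # plain HTTP is sniffable
    conn.request("GET", "/inbox", headers={"Cookie": stolen_cookie})
    resp = conn.getresponse()
    print(resp.status)  # a 200 here means the server accepted the stolen session

    # The fix described above: bookmark https:// so the cookie never
    # travels in the clear in the first place.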
In a separate talk, Billy Hoffman and Brian Sullivan, both of SPI Dynamics, talked about the rush to Web 2.0 and how even some established sites are "Ajaxify-ing" themselves at the expense of good security practices. To prove their point, the pair built an Ajax-enabled travel Web site, HackerTravel.com, following the current best practices for Ajax. In their talk, however, Sullivan and Hoffman showed how they could take advantage of known weaknesses within Ajax. For example, they could rearrange the JavaScript on the client to either book every seat on the plane (staging a denial-of-service attack) or purchase a round-trip ticket for $1.
Last year, Hoffman talked about the many problems within Web 2.0 Ajax technology, and this year he more or less put the subject to bed by addressing developers directly, insisting that they not put business logic on the client side of the transaction but keep all of it on the Web server. You can hear more about this topic from Hoffman and Sullivan on a recent Security Bites podcast.
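Hoffman's server-side advice can be made concrete with a minimal sketch. The fare table and handler below are hypothetical, not SPI Dynamics' code, but they show the principle: the server recomputes price and availability instead of trusting whatever the Ajax client submits.

    # Server-side validation of a booking request (illustrative only).
    FARES = {"NYC-LAS": 350.00}  # authoritative fare table, kept on the server

    def book_ticket(route: str, client_quoted_price: float, seats_left: int) -> str:
        real_price = FARES.get(route)
        if real_price is None:
            return "rejected: unknown route"
        if client_quoted_price != real_price:
            # Tampered client-side JavaScript (e.g. the $1 round trip).
            return "rejected: price mismatch"
        if seats_left <= 0:
            return "rejected: no seats available"
        return f"booked {route} at ${real_price:.2f}"

    print(book_ticket("NYC-LAS", 1.00, 12))    # rejected: price mismatch
    print(book_ticket("NYC-LAS", 350.00, 12))  # booked NYC-LAS at $350.00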
Later in the conference, Billy Hoffman returned with John Terrill, executive vice president and co-founder of Enterprise Management Technology, to talk about a prototype Web 2.0 worm they've built, written in JavaScript and Perl. Hoffman explained that if there's a cross-site scripting vulnerability on a Web site, the worm can inject itself into that site in JavaScript form. The worm also carries a Perl form of itself, so that when a user visits an infected site, the JavaScript version is downloaded to the browser, while the Perl version can inject itself into the Web server, letting the worm move between client and server with ease.
We've seen computer worms before, but Terrill and Hoffman claim their new creation can pull vulnerability data off security sites such as Secunia and then exploit those new vulnerabilities, rendering current desktop security protection ineffective. No such worm currently exists in the wild, but the pair insist it's possible for others to do what they've done. You can hear Hoffman talk more about his creation in a recent Security Bites podcast.
There is hope. In addition to better coding practices on the Web server, another way to prevent runaway Web 2.0 vulnerabilities is to lock down the JavaScript in the client's browser. At Black Hat, Mozilla released new tools that allow anyone to test a browser for JavaScript errors. What's significant is that the tools work not only against Firefox but also against Apple Safari, Microsoft Internet Explorer, and Opera.
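One of those better server-side coding practices can be shown in a few lines: HTML-escaping user-supplied content before echoing it back denies an XSS worm its injection point. The payload below is illustrative, not the researchers' worm.

    # Escaping user input closes the cross-site scripting hole (illustrative).
    import html

    user_comment = '<script src="http://evil.example/worm.js"></script>'

    unsafe_page = "<p>" + user_comment + "</p>"             # script would run
    safe_page = "<p>" + html.escape(user_comment) + "</p>"  # rendered as inert text

    print(safe_page)
    # <p>&lt;script src=&quot;http://evil.example/worm.js&quot;&gt;&lt;/script&gt;</p>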
In an interview before her presentation, Window Snyder told me there are about 10,000 Firefox users worldwide who regularly download what are called nightly builds. Whenever the Mozilla security team puts out new fixes within the nightly builds, it's these 10,000 users who test the fixes on a wide variety of machines and under a wide variety of circumstances. Thus, Mozilla is able to roll out its security patches faster and with fewer headaches than its competitors. By tapping into its millions of users worldwide, Mozilla hopes more of these avid users will identify future Firefox flaws before they can be exploited.
Edit By: Robert Vamosi
8/04/2007
[OS Related]LinuxWorld: Open-source software being treated same as any other
August 03, 2007 - As more than 11,000 attendees prepare to converge on San Francisco for the LinuxWorld Conference & Expo next week, one industry analyst says customers are evaluating open-source software the same way they evaluate proprietary software: It has to be priced right and work well.
Enterprises are judging open source on its upfront cost, total cost of ownership, reliability and features, just as they would a commercial product, said Matt Lawton, an analyst at IDC. Criteria unique to open source, such as issues of potential liability for patent infringement and the level of technical support, are way down the list of worries, he said.
"Software is software, and things like functionality and reliability are the most important attributes, regardless of whether the software is open source or not," Lawton said. "But having said that, to the extent that open source can save end users money, then they are all ears."
If open source is increasingly being considered on par with proprietary software, that opens more opportunities for it in the enterprise market for use in servers, desktop computers and mobile devices.
Worldwide revenue for open-source software, which reached $1.8 billion in 2006, is expected to grow at a compound annual growth rate of 26%, reaching $5.8 billion by 2011, IDC research shows.
Attendance at LinuxWorld, scheduled for Monday through Thursday, is expected to be higher than last year's 10,000 because LinuxWorld is running concurrently with the first-ever Next-Generation Data Center (NGDC) conference. The latter is devoted to energy efficiency and the use of virtualization. Both conferences are produced by IDG World Expo, which, like IDG News Service and Computerworld, is owned by International Data Group.
For its part, virtualization is also becoming more widely accepted and deployed in enterprises. Virtualization is software that divides one server into multiple logical servers, enabling IT professionals to run multiple software applications efficiently on one machine, increasing utilization rates.
A few years ago, companies used virtualization primarily in software test and development environments. But as they become more confident, companies are using it in production environments, said Andreas Antonopoulos of Nemertes Research. Virtualization can be an energy-saver because if server utilization increases to 70% or 80% from the average 20%, fewer power-hungry servers will be needed.
Antonopoulos calls virtualization the "sardine can" strategy.
The idea is to cram as many applications into one server as possible, like squeezing the tiny fish into a container. "That strategy has allowed IT departments to use more software applications ... for the same amount of capital budget," he said.
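The consolidation math is easy to check with a back-of-the-envelope sketch; the numbers below are illustrative, not Nemertes Research figures.

    # Same total workload, higher utilization, fewer hosts (illustrative).
    import math

    physical_servers = 100
    avg_utilization = 0.20
    total_load = physical_servers * avg_utilization  # 20 "server-units" of work

    for target in (0.70, 0.80):
        needed = math.ceil(total_load / target)
        print(f"at {target:.0%} utilization: {needed} hosts instead of {physical_servers}")

At 70 to 80 percent utilization, the same workload fits on 25 to 29 hosts instead of 100, which is where the energy savings come from.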
But data center operators may be reaching a point of diminishing returns using virtualization for server consolidation, Antonopoulos said. Companies that have used virtualization for consolidation for some time may have already reduced the number of servers to as few as they need. For them, other business drivers for virtualization include easier software development, disaster recovery, business agility and operational efficiency.
LinuxWorld/NGDC will feature keynote addresses from several tech executives, including Ann Livermore, executive vice president of Hewlett-Packard Co.; Diane Greene, president of VMware Inc.; Kevin Kettler, chief technology officer of Dell Inc.; and others.
Oracle Corp. is expected to reveal more about the results of its efforts to undercut Red Hat Inc. on the cost of support for Red Hat Enterprise Linux. Oracle identified a number of companies a few months ago that it claimed switched to Oracle from Red Hat for support, but Red Hat disputed those claims.
Also expected is more information on Microsoft Corp.'s deals with Novell Inc. and a few other Linux distributors, in which Microsoft says it is working to improve the interoperability of Linux with Windows. Microsoft also caused a stir in the open-source community when it warned in May that Linux violates 235 of its patents, although many in the open-source community dismissed the claim as sabre-rattling.
This is also the first LinuxWorld since the adoption of the revised version of the General Public License for open-source software, known as GPL v3.
Edit By: Robert Mullins
[PC News]Microsoft delays Mac Office 2008
Don't look for it until January
August 02, 2007 - Microsoft Corp. today pushed back the release date of Office 2008 for Mac until January, a delay from an earlier promise to deliver the new suite this year.
"It was clear from our June and July quality checkpoints that no matter how hard we tried, we couldn't release our product in time for the Christmas season with the kind of quality we wanted," said Craig Eisler, the development group's general manager, on the team's blog.
The new mid-January debut to retail would correspond with Macworld Expo, Apple's big conference and trade show, which is slated for Jan. 14-18 in San Francisco.
Previously scheduled to ship in the second half of this year -- Microsoft has never been more specific than that six-month window -- Office 2008 will move into release to manufacturing status in December, launch at Macworld and be available to volume license customers in the first quarter of 2008, said Eisler. Microsoft had given no indication that the original 2007 ship date for the suite might slip, although the first Open XML file converters for Office 2004 and Office v. X made a late appearance in May, a month or more past an earlier deadline.
Eisler, who just came on board Microsoft's Mac business unit in June, did not give an explanation for the schedule slip. Elsewhere, however, he was quoted as saying Apple's 2006 switch to Intel processors and the ensuing need to move to different development tools, as well as the ongoing struggle with the new Open XML file formats, played parts.
Some Mac users didn't take kindly to the announcement. One, dubbed "p0intblank" on MacRumors's message forums, offered a tart observation: "Microsoft delaying a product? I wish I could've seen this coming." Others on the same thread said this was the last straw and that they wouldn't be coming back to Microsoft. Instead, they would turn to, or continue using, the open-source OpenOffice and NeoOffice suites, or Google Docs.
Other development will be shoved aside for now, Eisler added, to focus on Office 2008. "We're in an 'all hands on deck' mode right now to ensure Office 2008 gets finished on time, and so you will not see final versions of our RDC client or file format converters until sometime after we ship Office." A beta of Remote Desktop Connection Client -- a program that lets Mac users connect to Windows systems to access files and run applications on those PCs -- was released earlier this week; an update to the Office 2007 file converter popped up Tuesday.
An invite-only Office 2008 for Mac beta test is currently under way, but Microsoft has no intention of opening that to the general public. Instead, said Eisler, his group will offer what he called "sneak peeks" to users, although he didn't spell out exactly what that might entail.
Prices for Office 2008 have not been set. Current prices for Office 2004 for Mac range from $149 to $499.
Edit By: Gregg Keizer
[Virus Security]Russian malware storm brewing?
Trend Micro spots hack-packed server in Russia
August 02, 2007 - Security researchers at Trend Micro Inc. have spotted a Russian server loaded with more than 400 different pieces of malware that may be poised to launch a large-scale attack through malicious Web sites hosted in Italy.
Chenghuai Lu, a senior threat analyst at the Tokyo-based antivirus vendor, recently uncovered a site with several hundred malicious programs and traced the site's server to a Russian IP address. Among the harbored malware were examples of three Trojan families: Dropper.cko, Clicker.qu and Polycrypt.g. All three clans typically hijack Internet Explorer on compromised PCs and direct users to adult Web sites.
Meanwhile, another Trend Micro researcher, senior software engineer Feike Hacquebord, discovered a large number of Italian-language Web sites that at first glance appeared to be compromised with malicious IFRAMEs -- elements inserted into a page's HTML, often carrying JavaScript, that can hijack the PC of anyone whose browser visits the site. On second look, however, the Italian sites do not appear to have been hacked, but instead were created with the IFRAMEs in mind. According to Trend Micro, the IFRAMEs point to the malware-packed Russian site found by Lu.
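To illustrate the pattern (the regex and URL below are hypothetical, not Trend Micro's detection logic), a hidden IFRAME of the kind described is easy to spot in a page's source:

    # Flag zero-size IFRAMEs in fetched HTML (illustrative only).
    import re

    page = ('<html><body>benign content'
            '<iframe src="http://bad.example/in.php" width="0" height="0">'
            '</iframe></body></html>')

    hidden_iframe = re.compile(r'<iframe[^>]+width="0"[^>]+height="0"', re.I)
    if hidden_iframe.search(page):
        print("suspicious zero-size IFRAME found")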
"Looking at these massive samples of malware, we can't help to think that there's something brewing in Russia," said Carolyn Guevarra, a third researcher at Trend Micro, on the team's blog yesterday. "We have just seen these cybercriminals pull the 'Italian Job' recently," she added. "Are we now seeing a 'Russian Uprising' coming our way?"
Guevarra's Italian comment refers to a large-scale attack about six weeks ago that involved more than 10,000 hacked sites hosted in that country. Those attacks were guided by Mpack, a multistrike exploit tool kit that hackers had deployed on one or more servers; the compromised sites secretly directed users to an Mpack-equipped server, which then tried a number of exploits on the PC.
Trend Micro has blocked the malicious Web sites for its customers and is working to develop more information on the possible attack plot. "More details soon," Guevarra promised.
Edit By: Gregg Keizer
[Network News]Web 2.0: Big app on campus
When I was in college, there was this one classmate everyone found especially annoying. Quite the little joiner, she would post opinions of dubious intellectual worth on the class message board just to show that she did the reading and to puff herself up with some tangentially related story.
Gen X cynicism has given way to millennial self-absorption as a new generation's lust for celebrity spreads to college classrooms, say educators. Now, universities are hoping to tap into that urge with new technologies to recruit prospective students and entice current students to stretch their intellect.
"A lot of students...like showing off their work. They like being published. They like being on display," said Barbara Knauff, senior instructional technologist at Dartmouth College.
Other educators, echoing Knauff's comments, see the enticement of notoriety through Web 2.0-style social tools--blogs, wikis and the like--as a way to engage students in their education and maybe even get them to choose one school over another.
Seton Hall University uses social tools as a way to hook students even before they have officially started. A log-in is mailed to new students along with the acceptance materials, according to Jan Day, senior director of client engagement at Blackboard, an educational software company that worked with the university to implement the site.
"One of the pain points in higher education is that they said "no" to a whole bunch of people and are counting on kids to accept. They know they need X percent," said Day.
A social-networking environment gets students comfortable with a school well before freshman orientation, said Day. Prospective students can e-mail roommates, make friends and find out the best campus hangouts even before they accept admission.
Some universities use video downloads to introduce professors.
Apple's iTunes U--though met with skepticism among professors wary of freely distributing their valuable content--is a useful public relations tool, according to Rhonda Blackburn, assistant director at Texas A&M University. Professors have used it to post videos introducing themselves, their research and their classes.
Once students get to universities, the tools continue. Classes in which content is pushed out one way to students are becoming passe. Instead, instructors are beginning to distribute lecture content to encourage intellectual debate and research online--away from the classroom--and are using class time for more in-depth discussion.
Knauff said self-publishing tools are an enticing way to get college students to develop original thoughts as opposed to simply repeating what they think professors want to hear. Students are collectively creating glossaries and repositories for academic articles, audio files and videos.
"They write for their peers as well and it creates a different motivation. They want to do well, don't want to look phony and get excited about the projects with the media aspect," said Knauff.
The multimedia or personal stuff that professors may think of as flashy filler is getting students to make an emotional investment in their education. "Sure, the content they offer is not as good as if a faculty member produced it. The content expert is always going to be better at creating the content, but that's not the point," said Knauff.
And it goes beyond blogs replacing reading journals for undergrad American lit classes. Dartmouth's medical school students use wikis to author, share and critique case studies.
Michael Barrett, a doctor and clinical associate professor at the Temple University School of Medicine, found that listening to heartbeat audio files drastically improved stethoscope skills.
In a study Barrett presented in March to the American College of Cardiology, 149 doctors correctly identified heartbeats 80 percent of the time, compared with the usual 40 percent. Barrett initially distributed his files on CD, until his students suggested he make the files available for iPods.
Kenneth Hartman, academic director of Drexel University's eLearning program and teacher of graduate education classes, uses TVEyes.com, a search site for major TV and radio content, to automatically feed students relevant video content.
Some Texas A&M professors use Camtasia, a software program that enables users to create videos of screen captures with voice-overs and an aid for figuring out complicated math problems.
"The theory is that lecturing is not an effective way for everyone to learn, but if you make a student create, they learn an incredible amount. That's the whole idea with changing this paradigm," said Knauff.
Old school meets new school
Some see the advent of Web 2.0-style tools in the classroom heralding a shift in everything from education theory to how schools are built. The bottom line: traditional lecturing may be on its way out, said Claire Schooley, an analyst at Forrester Research who follows learning trends at universities and corporations.
"That interaction between student and professor is going to become more prominent where you have already read about or watched the lecture online. The days of the large university with a 300-person lecture hall are over," said Schooley. "Universities will be built very differently, with the concentration on workshop life."
New tools could also help keep students honest. Some tools require log-ins that can also provide a way of tracking participation in group projects, according to Hartman.
"Every term I would get someone coming up and saying 'Dr. Hartman, here's the paper from the five of us, but I did most of the work.' Short of rolling out the Spanish Inquisition, there's not much you can do about it at that point," said Hartman. "With wikis, I can see who pulled the load and who didn't do anything."
User-friendly multimedia communication servers are also being used for more efficient uploading and distribution of educational multimedia to specific people without the need for IT help, according to Blackburn.
With permission from copyright holders, professors are posting things like films and language lessons to university servers. They can be accessed in streaming format by a specific set of students as designated by the professor. The files are automatically deleted from the server at the end of the semester, said Blackburn.
While undergrads do still have to get up in front of the class for Texas A&M's required public speaking class, technology has made the process a little less traumatic. Instead of critiquing a student in front of the class, a video of his or her speech, accessible only to the speaker and his or her professor, is uploaded to a server. The student then watches the video and submits a self-critique, while the professor sends a private critique to the student.
Universities are not just limiting tools to professors and classrooms. Students are given server space to develop Web sites, RSS feeds, blogs, podcasts, videos, discussion boards and e-mail groups for clubs, groups and political campaigns.
And then there's Second Life. In the spring semester of 2007, Texas A&M's department of recreation, park and tourism sciences started using the virtual world to run scenarios of park ranger exercises.
Second Life is being evaluated by a growing number of instructors; 1,800 of them met at an in-world conference in May to discuss educational best practices.
The popular virtual world is of particular interest to universities making substantial revenue from online degrees.
Walden University faculty member Kevin Jarrett, who teaches an online master's course in education, won a $10,000 grant to spend six months researching Second Life's educational potential.
"It's one thing to look at a discussion board, wikis and blogs. It's something else completely different to physically act in a 3D environment with others in your class. There is increased engagement and feelings of identity," said Jarrett.
Hartman, a member of Drexel's Second Life committee, says his school's presence is a marketing tool right now, but that in-world classes are probably only three years away.
"Just like with hybrids and the car industry a few years ago, I need to start building that car because if I wait three years, I'll miss that curve," Hartman said. "I'm building it now as a prototype, but I don't expect to take it out and race it."
Edit By: Candace Lombardi
[PC Technology]Accelerate Windows by Tweaking Virtual Memory
These tricks can pep up your computer; plus, download a free Mac OS X skin.
If you poke around tip sites, you'll find a lot of myths and harebrained theories about optimizing virtual memory (the hard-disk space Windows uses to supplement your RAM)--a few of them even perpetuated by me. This time I went to the horse's mouth for the Microsoft-approved ways to set Windows' memory management to full steam ahead.
If you have only one hard drive, just leave well enough alone. But if you have two or more internal or external hard drives (not just disk partitions), your PC will be peppier if you keep the default paging file (what Microsoft calls the virtual memory disk space) on your boot drive (the one that holds Windows) and add a paging file to the second drive.
To do so, log in to Windows as an administrator and verify that you have more than one hard drive in your computer: Click Start, Run (just Start in Vista), type diskmgmt.msc, and press Enter to open the Disk Management utility (click Continue in the User Account Control prompt, if necessary). The bottom pane shows each disk on your system and the drive letter that corresponds with each partition. To have only one new paging file, choose the fastest drive you have. Remember that an internal drive will be faster than an external drive in most cases. Note the drive letter(s) you'll use.
Now right-click My Computer (Windows 2000 and XP) or Computer (Vista) and choose Properties. In Windows 2000 and XP, select the Advanced tab; in Vista, pick Advanced system settings in the task pane on the left.
Bonus tip: In Vista, you can open the System Properties dialog box directly to the Advanced tab by clicking Start, typing systempropertiesadvanced, and pressing Enter. As with the preceding method, you may have to click Continue in the User Account Control dialog box.
In the Performance section, click Settings (Performance Options in Windows 2000) and then the Advanced tab (in XP and Vista). Under Virtual Memory, click Change. In Vista, uncheck Automatically manage paging file size for all drives. You'll see a paging file size already listed on your Windows drive; leave it alone, or Windows won't be able to create a memory dump file with debugging info in the event of a particular type of system error.
Next, in the drive list select a partition on a different drive where you want to add another paging file. Select Custom size if you want to set the size yourself and type in the initial and maximum size (Microsoft says making them the same amount is most efficient); Microsoft's rule of thumb is to make the file 1.5 times the amount of RAM in your system. Or select System managed size to let Windows determine the size (XP and Vista only). Click Set, then OK.
If the partition you selected contains another installation of Windows, you'll receive an error message warning that the file pagefile.sys already exists there. As long as the two operating systems are not running at the same time using virtualization software, it's safe for you to overwrite or delete pagefile.sys, since Windows will re-create the file automatically the next time you boot that partition's Windows installation.
You'll see a reminder that the changes will take effect the next time you restart your system. Windows will most often use the paging file on the least-busy drive, which means your new paging file will do most of the work.
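If you'd rather compute the rule-of-thumb size than do the arithmetic by hand, here is a minimal, Windows-only Python sketch of the 1.5-times-RAM guideline quoted above; it only prints a suggestion and changes no settings.

    # Suggest a paging-file size from installed RAM (Windows only, illustrative).
    import ctypes

    class MEMORYSTATUSEX(ctypes.Structure):
        _fields_ = [
            ("dwLength", ctypes.c_ulong),
            ("dwMemoryLoad", ctypes.c_ulong),
            ("ullTotalPhys", ctypes.c_ulonglong),
            ("ullAvailPhys", ctypes.c_ulonglong),
            ("ullTotalPageFile", ctypes.c_ulonglong),
            ("ullAvailPageFile", ctypes.c_ulonglong),
            ("ullTotalVirtual", ctypes.c_ulonglong),
            ("ullAvailVirtual", ctypes.c_ulonglong),
            ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
        ]

    status = MEMORYSTATUSEX()
    status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

    ram_mb = status.ullTotalPhys // (1024 * 1024)
    print(f"Installed RAM: {ram_mb} MB")
    print(f"Suggested initial and maximum size: {int(ram_mb * 1.5)} MB")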
Edit By: Scott Dunn
8/03/2007
[Virus Security]Diebold voting machines vulnerable to virus attack
Only limited access needed to wreak havoc on an election
August 03, 2007 - Diebold Election Systems Inc. electronic voting machines are not secure enough to guarantee a trustworthy election, and an attacker with access to a single machine could disrupt or change the outcome of an election using viruses, according to a review of Diebold's source code.
"The software contains serious design flaws that have led directly to specific vulnerabilities that attackers could exploit to affect election outcomes," read the University of California at Berkeley report, commissioned by the California secretary of State as part of a two-month "top-to-bottom" review of electronic voting systems certified for use in California.
The assessment of Diebold's source code revealed an attacker needs only limited access to compromise an election.
"An attack could plausibly be accomplished by a single skilled individual with temporary access to a single voting machine. The damage could be extensive -- malicious code could spread to every voting machine in polling places and to county election servers," it said.
The report (PDF format), titled "Source Code Review of the Diebold Voting System," was apparently released Thursday, just one day before California Secretary of State Debra Bowen is to decide which machines are certified for use in California's 2008 presidential primary elections.
The source-code review identified four main weaknesses in Diebold's software: vulnerabilities that allow an attacker to install malware on the machines, a failure to guarantee the secrecy of ballots, a lack of controls to prevent election workers from tampering with ballots and results, and susceptibility to viruses that could allow attackers to influence an election.
"A virus could allow an attacker who only had access to a few machines or memory cards, or possibly to only one, to spread malicious software to most, if not all, of a county's voting machines," the report said. "Thus, large-scale election fraud in the Diebold system does not necessarily require physical access to a large number of voting machines."
The report warned that a paper trail of votes cast is not sufficient to guarantee the integrity of an election using the machines. "Malicious code might be able to subtly influence close elections, and it could disrupt elections by causing widespread equipment failure on election day," it said.
The source-code review went on to warn that commercial antivirus scanners do not offer adequate protection for the voting machines. "They are not designed to detect virally propagating malicious code that targets voting equipment and voting software," it said.
In conclusion, the report said Diebold's voting machines had not been designed with security as a priority. "For this reason, the safest way to repair the Diebold system is to reengineer it so that it is secure by design," it said.
The Diebold source-code review and several other documents, including a review of source code used in other voting systems, had earlier been withheld from release by the Secretary of State, even as other reports related to the review of voting machines were released on July 27.
An explanation posted on the secretary of State's Web site on July 27 noted that the source-code review and other reports had been submitted on time. "Their reports will be posted as soon as the Secretary of State ensures the reports do not inadvertently disclose security-sensitive information," the Web site said.
The delayed release of the source-code review meant that David Wagner, an associate professor of computer science at the University of California, Berkeley, and an author of the report, was not able to present his findings at a public hearing held on July 30 to discuss the results of the voting system review.
Edit By: Sumner Lemon
[News]DefCon: Hacking meetup infiltrated by Dateline 'media mole'
Chatterboxes beware: Tabloid-TV producer alleged to be in the house
August 03, 2007 - Trust nobody.
That's what organizers of the 15th annual DefCon hacking conference are telling attendees Friday, after being tipped off that the TV news program Dateline NBC has sent a producer with a hidden camera to investigate the show.
Cameras of any kind are a strict no-no at the show, which bills itself as a gathering for hackers, both legitimate and not-so-legitimate, and takes special steps to ensure the privacy of its attendees. The show keeps no list of attendees, except for press and speakers, and there's only one way to get in the door: paying $100 cash.
DefCon organizer Dark Tangent (a.k.a. Jeff Moss) said he's concerned that Dateline's producers may sensationalize what they see and undermine the conference's goal of fostering a free exchange of ideas. "We researched them online and we see [the show's producers] do hit-and-run pieces," he said. "It's not actually research and news. It's just sensationalistic nonsense. And that makes us nervous."
Moss says he's been told that Dateline Field Producer Michelle Madigan is at the show with a hidden camera. NBC did not immediately reply to a request for comment.
Media and bloggers have gone undercover at DefCon in the past, but nobody of the stature of NBC has ever tried this, Moss said.
"I'm concerned that some impressionable kid... is just going to get cornered and is going to start bragging about stuff," he said. "The next thing you know, he's on nightly news."
DefCon runs through Sunday at the Riviera Hotel and Casino in Las Vegas.
Edit By: Robert McMillan
[Virus Security]An antidote for the Blue Pill?
At Black Hat, questions swirl around VM rootkit detection
August 03, 2007 - Can rootkit malware that hides by mimicking a software-based virtual machine ever be detected? That was the topic of debate as security researchers presented their latest findings to packed audiences at the Black Hat conference here.
Joanna Rutkowska, researcher at the firm Invisible Things, was the one who famously ignited the keen interest in virtualized rootkits after she described and demonstrated her rootkit creation, called Blue Pill, at last year's Black Hat.
Wednesday, Rutkowska returned to Black Hat to acknowledge that researcher Edgar Barbosa has come the closest to devising a method for detecting Blue Pill. "Congratulations to Edgar," she said, during the highly technical presentation she made with her colleague, researcher Alexander Tereshkin. Rutkowska said she and her colleague hadn't found a way yet to evade Barbosa's so-called counter-based detection method as detailed in a paper he made public in July at the SyScan conference.
Rutkowska also said she is posting the Blue Pill code publicly for download at the Blue Pill Project Web site. "You can freely download Blue Pill right now," she said. Blue Pill has been developed in a number of variants since last year, including one based on nested hypervisors, where stealth, virtual-machine malware is nested inside other stealth, virtual-machine malware.
On a separate topic, she faulted Microsoft's code-signing security that requires a Microsoft-approved signed certificate for kernel-mode protection. Rutkowska last year had shown a way to break that security, which would let an attacker load malware on 64-bit Vista, but Microsoft fixed that problem a few months ago by changing an API. However, she asserted on Wednesday that she and Tereshkin had uncovered another route around Vista kernel protection: faulty third-party drivers, which, although digitally signed, are simply vulnerable.
She also noted that it was all too simple to obtain a Microsoft-approved code-signing certificate through a largely automated process that cost $250 for a certificate. Microsoft was not immediately available to comment on Rutkowska's findings.
At an earlier session at Black Hat titled "Don't Tell Joanna, the Virtualized Rootkit is Dead," researchers Thomas Ptacek of Matasano Security, Nate Lawson of Root Labs and Peter Ferrie of Symantec labored to describe how they are on the path to detecting virtual-machine malware through three technical approaches: side-channel attacks, vantage-point attacks and performance event counters.
In the end, however, Ptacek said the research was focused on detecting the presence of virtualization malware called Vitriol, created by researcher Dino Dai Zovi, for VMware. That's because Vitriol is one of only a few known examples of virtualization malware, and Rutkowska had declined to supply any Blue Pill code before the conference.
The three researchers indicated they intend to release their published findings, as well as a software framework they call Samsara for detecting virtualization malware, within a few days.
Edit By: Ellen Messmer
[Wireless Network]Comba Develops Remote Radio Unit for China 3G Market
The RRU is a fiber-fed radio that extends the coverage of base stations yet maintains an extremely small footprint. With the Chinese telecoms market bracing itself for 3G licensing, Comba's RRU solution will play a key role in helping the rapid roll-out of TD-SCDMA networks.
The company also signed a supply contract with a launch customer and announced that it has received an initial order for the new TD-SCDMA RRU from a major TD-SCDMA base station manufacturer. With a total value in excess of USD 1 million, Comba’s RRU is being deployed in 3G network trials within China.
Typically deployed in combination with base stations, Comba's RRU enables operators to roll out TD-SCDMA networks effectively and rapidly whilst maximizing capacity and coverage. With up to 30 percent cost savings, the RRU deployment solution compares favorably to the traditional approach of a pure base station network buildout.
Featuring one of the market's smallest remote units, the RRU can be deployed in difficult-to-reach sites in a variety of formations to suit different coverage requirements without signal degradation. For example, a series of RRUs may be deployed in a linear formation to extend TD-SCDMA coverage along a highway, or in a matrix formation to cover a city block.
Developed in-house by Comba's R&D division in Guangzhou, the RRU complements Comba's portfolio of TD-SCDMA solutions that ranges from tower mounted solutions and repeaters to complete in-building coverage systems that have already been deployed in numerous 3G trials throughout China.
Edit By: Horia Covaci
[Security News]Siemens and Fujitsu to Collaborate in Biometric Authentication Area
Siemens and Fujitsu announced a business collaboration designed to expand the market for the PalmSecure palm vein authentication system developed by Fujitsu in combination with Siemens’ biometric software “ID Center”.
Siemens IT Solutions and Services and Fujitsu have combined their technologies to develop a new biometric IT solution for personal recognition using palm vein scanning. The combination of the Siemens biometric software “ID Center” and the Fujitsu “PalmSecure” device recognizes the vein pattern under the skin on the palm of the hand that is unique to each individual. The palm is scanned without direct contact by an infrared scanner. During this process the captured hand vein pattern is verified against a preregistered pattern for the person being scanned. It is practically impossible to forge the identity since the veins are under the skin.
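Neither company publishes the matching algorithm, but the verification step described above reduces to comparing a freshly captured feature vector with the enrolled template under a similarity threshold. A minimal Python sketch of that final comparison; the feature extraction, vector form and 0.95 threshold are illustrative assumptions, not PalmSecure's actual parameters:

import math

def cosine_similarity(a, b):
    # Compare two feature vectors derived from vein images.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def verify(captured, enrolled, threshold=0.95):
    # Accept the claimed identity only if the scanned pattern is
    # sufficiently close to the preregistered template.
    return cosine_similarity(captured, enrolled) >= threshold

# Toy usage: the enrolled template and a fresh (slightly noisy) scan.
enrolled = [0.12, 0.80, 0.33, 0.55]
fresh = [0.11, 0.82, 0.30, 0.56]
print(verify(fresh, enrolled))  # True for a close match

A real system also has to handle sensor noise and liveness detection; the threshold trades false accepts against false rejects.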
Fujitsu provides “PalmSecure” to authenticate individuals in banks, hospitals, universities and municipal authorities. The Siemens biometric software “ID Center” provides extremely secure administration of the data collected with this type of application and has already been installed worldwide by numerous customers in public sector, healthcare, and industry segments.
This biometrics partnership with Siemens allows Fujitsu to open up the European market in particular. For Siemens, the cooperation adds a further component to its biometric authentication offering, which previously included face, iris and fingerprint recognition.
Edit By: Horia Covaci
8/02/2007
[Virus Security]Malware Hunts Down and Deletes MP3s
Low-risk worm deletes MP3s on infected PCs, spreads via removable flash drives.
Security experts have discovered a worm that might just be the recording industry's dream application: it hunts down and deletes MP3s on infected PCs.
Security companies say the worm is only low risk, although its unusual payload could give a nasty surprise to an ardent music fan. The motivation of the hackers who created it is unclear.
"The authors of this worm are more likely to be teenage mischief makers than the organized criminal gangs we typically see authoring financially-motivated malware these days," said Graham Cluley, senior technology consultant for the security vendor Sophos PLC.
"As such, it's not something we would lose an awful lot of sleep over, but there are some lessons that computer users should learn to minimize the chances of infection," he said.
The worm spreads via removable flash drives, reminiscent of the way viruses spread via floppy disks decades ago. That may be an attempt by the authors of the worm to bypass e-mail filters and Web gateway filters that block malicious software, Cluley said.
Symantec Corp., which calls the worm W32.Deletemusic, said in an advisory that the worm copies itself to all drives on a PC. It also creates an autorun file to start itself whenever a user accesses a drive.
The worm affects PCs running Windows 2000, 95, 98, Me, NT, Server 2003, XP and Vista, Symantec said. Users could disable the autorun feature in Windows that automatically launches programs on CDs or USB drives, Cluley said.
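Cluley's suggested mitigation can be automated. The sketch below (an illustration of the documented NoDriveTypeAutoRun Explorer policy value, not vendor-supplied code) sets it to 0xFF, which disables autorun on every drive type for the current user:

import winreg

# The Explorer policy key; it may not exist yet on a fresh profile.
KEY = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"

with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY) as key:
    # 0xFF = disable autorun on all drive types (CD, USB, network...).
    winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)

print("Autorun disabled for all drive types; takes effect at next logon.")

With autorun off, inserting an infected flash drive no longer launches the worm automatically, though opening its files by hand still can.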
It's not the first malicious software to go after music files. Two years ago, researchers saw the Nopir-B worm, which posed as a utility to make copies of DVDs. Once on a machine, it displayed an anti-piracy graphic and tried to delete MP3s and other files.
Last year, a Trojan horse program called Erazer took the destructive activity a step further, wiping out MP3s as well as movies, Cluley said.
Edit By: Jeremy Kirk
[Network Security]Mozilla Giving Away Security Testing Tools
Mozilla is releasing some of its own security tools to the open-source community.
Mozilla Corp. will release some of its homegrown security tools to the open-source community, the company's head of security said Wednesday, starting with a "fuzzer" it uses to pin down JavaScript bugs in Firefox.
The JavaScript fuzzer, said Window Snyder, Mozilla's security chief since last September, will be handed over Thursday morning, following a presentation at Black Hat, a two-day security conference that opened Wednesday in Las Vegas.
"We're announcing that we'll be sharing our tools with the community, and releasing the JavaScript fuzzer then," said Snyder. Other tools will follow, including fuzzers that stress-test the HTTP and FTP protocols. Those two tools, however, are not ready to offer to outsiders, largely because Mozilla wants to wrap up talks with other browser vendors before they are shared.
Fuzzing, a technique used by both white- and black-hat researchers trolling for vulnerabilities, and by developers to finger flaws in their code before it goes public, drops data into applications or operating system components to see if -- and where -- breakdowns occur. Typically, the process is automated with a fuzzer, the term for software that hammers on application inputs. The JavaScript fuzzer, Snyder said, has identified "dozens" of vulnerabilities in Firefox code.
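To make the idea concrete, here is a toy mutation fuzzer in Python. It is far cruder than Mozilla's grammar-aware JavaScript fuzzer, but it shows the same loop: corrupt an input, feed it to the target, and record failures. The target function and seed below are illustrative, not part of Mozilla's tool:

import json
import random

def mutate(seed: bytes) -> bytes:
    # Flip a handful of random bytes in the seed input.
    data = bytearray(seed)
    for _ in range(random.randint(1, 8)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 1000):
    findings = []
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            target(sample)
        except Exception as exc:
            findings.append((sample, exc))
    return findings

# Toy target: Python's JSON parser. For a robust parser most findings
# are expected parse errors; a real fuzzer instead watches for crashes,
# hangs and memory corruption in native code.
findings = fuzz(lambda b: json.loads(b.decode("utf-8", "replace")),
                seed=b'{"a": [1, 2, 3], "b": "text"}')
print(len(findings), "mutated inputs raised exceptions")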
Snyder said Firefox developers have created many tools, and though a lot of them are small, special-purpose ones, all of them could be useful to others.
"We want to make the work we're already doing available to other people and to other products" in the hope that the tools might help developers outside Mozilla spot problems in their code, she said. Snyder sees a direct benefit to Mozilla, too. The more people who bang on the tool, tweak it and modify it, the better the tools should become, she said.
She seemed unconcerned that any tool Mozilla released would prove a significant danger to users. Although hackers also use fuzzers in their vulnerability-sniffing tool kits, "the tool isn't bad or good on its own," Snyder argued. "They use debuggers all the time. Debuggers aren't bad" because of that.
Mozilla might have wished it had fuzzed Firefox a bit more over the past three weeks, when it was caught in a name-calling contest between its supporters and Microsoft Corp.'s. Early last month, Danish researcher Thor Larholm found what he said was a critical input-validation bug in Internet Explorer that let the browser pass potentially malicious URLs to other programs, including Firefox. He laid blame on IE, while other security experts said it was Firefox's fault.
Shortly after that, Snyder hinted that she saw the whole mess as an IE problem, but within days acknowledged that Firefox was guilty of the same behavior. "We thought this was just a problem with IE," she said July 23. "It turns out, it is a problem with Firefox as well."
Wednesday, she said that the very public disagreements between security experts as to which browser was to blame had actually been a good thing. "Debate is healthy," she said. "And if we're wrong, we say we're wrong and move on."
Mozilla updated Firefox twice in July, first on July 17 with 2.0.0.5, and then Monday when it released Version 2.0.0.6. Both updates included fixes for the URL protocol handling bug that started the brouhaha. "We weren't twiddling our thumbs during all of this," said Snyder. "We were also on the back end moving forward with fixes."
At Black Hat, Snyder and fellow Mozilla executive Mike Shaver, the company's technology strategist, also plan to discuss the new security features of Firefox 3, the major update that currently is in preview testing. Firefox 3 is expected to ship sometime this year.
Edit By: Gregg Keizer
[Security News]Credit Cards Ranked on Security as Fraud Fears Grow
What's in your wallet may not be the most secure, antifraud credit card available. The top safety scorecard honor went to Bank of America's Visa Platinum card.
What's in your wallet may not be the most secure, antifraud credit card available.
A new study of credit cards from 25 of the largest issuers found that many still fall short of protecting users from fraud.
The report, released by Javelin Strategy & Research, a Pleasanton, Calif.-based financial services research firm, found that while almost all card issuers do well in helping their customers after fraud or theft occurs, many need to upgrade their identity fraud detection tools.
Among the key deficiencies:
-- 56 percent of the 25 card issuers surveyed continue to require full Social Security numbers to help identify their customers, whether by phone, online or by mail. "This is a risky practice that unnecessarily increases the customer's exposure to identity fraud," the report states.
-- Consumers are not allowed to set transaction limits or block certain types of transactions using their credit cards, such as restricting card use to purchases only made with U.S. vendors, according to the study. In fact, only 24 percent of the surveyed card issuers allow consumers to set so-called user-defined limits and/or prohibitions (UDLAPs) on their accounts to help prevent unauthorized use, the study concluded.
-- While more card issuers now offer consumers e-mail or telephone "transaction alerts" to advise them of account activity, the number of participating card companies is still small -- about 8 percent.
Not all of the news is bad, however.
Customers do appear to be safer logging into their accounts online than they have been in the past, because of the widespread use of multifactor log-in processes, which require a username, a password, identifying information such as a photograph chosen by the user, and a correct answer to a challenge question, according to the study. More than 80 percent of the surveyed card issuers now use a multifactor authentication process.
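As a rough picture of what such a multifactor log-in looks like in code, the Python sketch below checks a password and a challenge answer with constant-time comparisons, and models the user-chosen site image that the site displays to prove itself before the password is typed. The account record and parameters are illustrative assumptions, not any issuer's actual scheme:

import hashlib
import hmac

def kdf(secret: str, salt: bytes) -> bytes:
    # Slow, salted hash so stolen records are hard to crack.
    return hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100000)

SALT = b"0123456789abcdef"
ACCOUNT = {
    "user": "alice",
    "pw_hash": kdf("correct horse", SALT),
    "site_image": "red_bicycle.png",   # shown to the user pre-password
    "challenge": "First pet's name?",
    "answer_hash": kdf("rex", SALT),
}

def login(password: str, user_recognized_image: bool, answer: str) -> bool:
    ok_pw = hmac.compare_digest(kdf(password, SALT), ACCOUNT["pw_hash"])
    ok_answer = hmac.compare_digest(kdf(answer, SALT), ACCOUNT["answer_hash"])
    # The image factor runs the other way: if the user does not see the
    # image chosen at enrollment, they should abort (anti-phishing).
    return ok_pw and user_recognized_image and ok_answer

print(login("correct horse", True, "rex"))  # True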
The Javelin report rated the card issuers using three criteria: prevention, detection and resolution. The top safety scorecard honor went to Bank of America's Visa Platinum card, which received 69 out of a possible 80 points, earning high marks for prevention techniques. The American Express Blue card finished second with 66 points, winning high ratings for detection protections for cardholders.
Two card issuers tied for third place with 64 points each -- the Discover Platinum Card and First National Bank Omaha's Platinum Edition Visa Card.
Rachel Kim, a Javelin risk and fraud analyst who wrote the study, said credit card security continues to evolve. "We're seeing that issuers are always going to be doing a great job in resolution," she said. "But detection is where they need to amp up their efforts."
The continued use of easy-to-steal and easy-to-obtain Social Security numbers as identification criteria by credit card issuers is "pretty scary," Kim said. "They don't have any need to use the entire Social Security number. They can just use the last four digits. It's just something they have been doing for so long. I am sure that over the next few years we will see a decrease in usage."
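Kim's last-four-digits point is simple to act on. A hypothetical masking helper like this one would let a call-center screen confirm identity without ever displaying or storing the full number:

def mask_ssn(ssn: str) -> str:
    # Keep only the last four digits; mask the rest.
    digits = "".join(ch for ch in ssn if ch.isdigit())
    return "***-**-" + digits[-4:]

print(mask_ssn("123-45-6789"))  # ***-**-6789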
Another step that more card issuers need to take is to provide an alert system for customers to quickly determine if a credit card is being used without authorization or if personal information, including passwords or addresses, is being changed fraudulently. "We definitely see a need for more issuers to offer alerts for changes in personal information," Kim said. "You should definitely be sent an e-mail alert if your password is changed."
She also called on card issuers to provide additional UDLAP options for customers.
This was the third annual Javelin report on card security, but changes in methodology this year don't allow easy comparisons to past reports, Kim said.
The Javelin study was conducted anonymously using a "mystery shopper" approach between April 15 and June 15 of this year through interviews by Javelin researchers with card issuer customer service representatives and through reviews of card issuer Web sites.
The card issuers surveyed by Javelin for the report were: Advanta, American Express, Bank of America, BB&T, RBS National, Capital One, Citibank, Commerce Bank, Discover, Fifth Third Bank, FNB Omaha, GE, Washington Mutual, Wells Fargo, HSBC, National City Bank, Navy Federal Credit Union, Nordstrom Bank, JPMorgan Chase, State Farm Bank, SunTrust Banks, Target, US Bancorp, USAA and Wachovia.
Although Barclays is a top 25 issuer, the researchers were unable to complete the interviews because of the absence of call centers. As a result, Barclays was removed from the list of surveyed issuers, with SunTrust Banks Inc. taking its place, according to Javelin.
Edit By: Todd R. Weiss
[Security News]Black Hat: NSA guru lauds security intelligence sharing
Private and public concerns alike benefit from cooperation, says Tony Sager
August 02, 2007 - U.S. government initiatives aimed at fostering the sharing of security intelligence throughout the federal space are helping to establish the community atmosphere and best practices necessary to help those agencies -- and private enterprises -- improve their network and application defenses, a National Security Agency leader told attendees of the Black Hat conference on Wednesday.
Stepping to the stage to deliver a keynote presentation at the annual hacker confab in Las Vegas, Tony Sager, chief of the Vulnerability Analysis and Operations Group at the NSA, said that data-sharing efforts led by his agency and others in the federal space are maturing rapidly.
Having served a little less than 30 years as a security expert at the NSA, Sager said that federal agencies are finally succeeding in their efforts to build standards for issues such as secure configuration of Microsoft's Windows operating systems, and that those guidelines are likewise being adopted by other security initiatives and moving into the public arena.
At the heart of the progress is the notion that government entities and private institutions cannot effectively tackle security problems on their own, a deduction that seems obvious, but one that has been hard to implement on a practical level, in particular among agencies such as the NSA and the Department of Defense, which closely guard all their IT policies.
"NSA has shifted the nature of its work over the last few years; the time has come when we are all living in this same chaotic network and need to come together to solve problems of this scale," Sager said.
"In the old days, the idea was that we could simply design away the risk, but this is a much more complex world today," he said. "We've gone from protecting [assets] to protecting not only data, but all the information around that and the infrastructure that supports it; it's a much more dynamic problem, and there's no way of escaping that this is a shared problem."
As part of its effort to help foster security data sharing, NSA has moved its focus from trying to build technologies aimed at solving major security issues to attempting to influence practices across the government space that can also be adopted by private-sector firms, he said.
A major element of the vision is pushing for standards that translate security intelligence into language that any organization can interpret, said Sager. He highlighted the Common Weakness Enumeration (CWE) project -- an effort aimed at creating a common language for identifying software vulnerabilities that is backed by the Department of Homeland Security and nonprofit Mitre -- as one example of the types of standards that are delivering on the NSA's goal.
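For a sense of what that common language looks like in practice, a scanner or vulnerability tracker can tag findings with CWE identifiers so that any other organization's tools can interpret them. The CWE identifiers below are real entries (names abbreviated); the finding records are illustrative:

# Map tool-specific findings onto shared CWE identifiers so different
# organizations' tools can exchange them unambiguously.
CWE_NAMES = {
    "CWE-79": "Cross-site Scripting",
    "CWE-89": "SQL Injection",
    "CWE-120": "Classic Buffer Overflow",
}

findings = [
    {"tool": "scanner-a", "cwe": "CWE-89", "component": "login form"},
    {"tool": "scanner-b", "cwe": "CWE-79", "component": "search page"},
]

for f in findings:
    print("%s in %s (%s)" % (CWE_NAMES[f["cwe"]], f["component"], f["cwe"]))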
"The time has come when folks in my business are thinking about how to transfer knowledge outwardly; we don't solve these problems one organization or one vulnerability at a time, so we're thinking of ways to leverage knowledge in light of the available economies of scale," Sager said. "We must be able to deliver expertise within the context of others' problems. In that way, this has become a business of influence [for the NSA]."
In a nod to the challenges of the past, Sager said that organizations such as the NSA traditionally developed their own practices for handling issues such as secure configuration of Windows, and that nearly every other government agency would do the same.
As government bodies finally began sharing their security information and establishing more uniform best practices, the agencies realized that they could even drive technology vendors such as Microsoft to begin shipping their products in the state that the organizations demanded -- and that other organizations, such as private enterprises, could begin to adopt the same measures and benefit from the data as well.
Despite the progress that is being made, Sager said that the ongoing process of creating uniform security frameworks such as the CWE and many other projects backed by Mitre -- a quasi-governmental body -- remains a challenge.
Organizations are sharing information, but the underlying processes that support the efforts still need further refinement, he said.
"We can't just dump our inboxes on each other. It has to be about sharing all of our different outputs in the same language, and people still don't understand that in a lot of cases," said Sager. "But through a lot of these efforts, the understanding is growing, and people are getting onto the same page, which is crucial to improving security for everyone."
Other observers agreed that the process of creating standard security language and practices across the government and private sectors is moving forward quickly.
Robert Martin, head of Mitre's CVE (Common Vulnerability Exposures) compatibility effort and a contributor to the CWE initiative, said that momentum is building behind his organization's guidelines and helping many government and private entities to better understand and share their own practices.
"With all these different pieces that are coming together, we are standardizing the basic concepts of security themselves as well as methods for reviewing and improving computing and networking systems," said Martin. "I see a future where a tapestry of tools, procedures, and processes are built over time that recognize and address the common problems that exist among all these constituencies."
Martin said that Mitre's efforts to add new security policy frameworks will continue to improve as they mature and even more parties begin to contribute their intelligence to the initiatives.
Black Hat attendees seemed encouraged by the progress being made, at least in terms of getting all the necessary parties to come together and share their tools and processes.
This level of collaboration is what has been sorely lacking in the security community in the past, observed Ray Kaplan, an independent security consultant based out of St. Paul, Minn.
"At last there's a metaview of all these shared problems. Up until recently, it seemed that this process was a confusing morass where everyone had different tools and procedures," Kaplan said. "The complimentary nature of what is going on with NSA and other agencies, and with Mitre and private involvement, should help create the common infrastructure needed to address these issues."
Edit By: Matt Hines
August 02, 2007 - U.S. government initiatives aimed at fostering the sharing of security intelligence throughout the federal space are helping to establish the community atmosphere and best practices necessary to help those agencies -- and private enterprises -- improve their network and applications defenses, a National Security Agency leader told attendees of the Black Hat conference on Wednesday.
Stepping to the stage to deliver a keynote presentation at the annual hacker confab in Las Vegas, Tony Stager, chief of the Vulnerability Analysis and Operations Group at the NSA, said that data-sharing efforts led by his agency and others in the federal space are maturing rapidly.
Having served a little less than 30 years as a security expert at the NSA, Stager said that federal agencies are finally succeeding in their efforts to build standards for issues such as secure configuration of Microsoft's Windows operating systems, and that those guidelines are likewise being adopted by other security initiatives and moving into the public arena.
At the heart of the progress is the notion that government entities and private institutions cannot effectively tackle security problems on their own, a deduction that seems obvious, but one that has been hard to implement on a practical level, in particular among agencies such as the NSA and the Department of Defense, which closely guard all their IT policies.
"NSA has shifted the nature of its work over the last few years; the time has come when we are all living in this same chaotic network and need to come together to solve problems of this scale," Sager said.
"In the old days, the idea was that we could simply design away the risk, but this is a much more complex world today," he said. "We've gone from protecting [assets] to protecting not only data, but all the information around that and the infrastructure that supports it; it's a much more dynamic problem, and there's no way of escaping that this is a shared problem."
As part of its effort to help foster security data sharing, NSA has moved its focus from trying to build technologies aimed at solving major security issues to attempting to influence practices across the government space that can also be adopted by private-sector firms, he said.
A major element of the vision is pushing for standards that translate security intelligence into language that any organization can interpret, said Sager. He highlighted the Common Weakness Enumeration (CWE) project -- an effort aimed at creating a common language for identifying software vulnerabilities that is backed by the Department of Homeland Security and nonprofit Mitre -- as one example of the types of standards that are delivering on the NSA's goal.
"The time has come when folks in my business are thinking about how to transfer knowledge outwardly; we don't solve these problems one organization or one vulnerability at a time, so we're thinking of ways to leverage knowledge in light of the available economies of scale," Sager said. "We must be able to deliver expertise within the context of others' problems. In that way, this has become a business of influence [for the NSA]."
In a nod to the challenges of the past, Sager said that organizations such as the NSA traditionally developed their own practices for handling issues such as secure configuration of Windows, and that nearly every other government agency would do the same.
As government bodies finally began sharing their security information and establishing more unilateral best practices, the agencies realized that they could even drive technology vendors such as Microsoft to begin shipping their products in the state that the organizations demanded -- and that other organizations, such as private enterprises, could begin to adopt the same measures and benefit from the data as well.
Despite the progress that is being made, Sager said that the ongoing process of creating unilateral security frameworks such as the CWE and many other projects backed by Mitre - a quasi-governmental body -- remains a challenge.
Organizations are sharing information, but the underlying processes that support the efforts still need further refinement, he said.
"We can't just dump our inboxes on each other. It has to be about sharing all of our different outputs in the same language, and people still don't understand that in a lot of cases," said Sager. "But through a lot of these efforts, the understanding is growing, and people are getting onto the same page, which is crucial to improving security for everyone."
Other observers agreed that the process of creating standard security language and practices across the government and private sectors are moving forward quickly.
Robert Martin, head of Mitre's CVE (Common Vulnerability Exposures) compatibility effort and a contributor to the CWE initiative, said that momentum is building behind his organization's guidelines and helping many government and private entities to better understand and share their own practices.
"With all these different pieces that are coming together, we are standardizing the basic concepts of security themselves as well as methods for reviewing and improving computing and networking systems," said Martin. "I see a future where a tapestry of tools, procedures, and processes are built over time that recognize and address the common problems that exist among all these constituencies."
Martin said that Mitre's efforts to add new security policy frameworks will continue to improve as they mature and even more parties begin to contribute their intelligence to the initiatives.
Black Hat attendees seemed encouraged by the progress being made, at least in terms of getting all the necessary parties to come together and share their tools and processes.
This level of collaboration is what has been sorely lacking in the security community in the past, observed Ray Kaplan, an independent security consultant based out of St. Paul, Minn.
"At last there's a metaview of all these shared problems. Up until recently, it seemed that this process was a confusing morass where everyone had different tools and procedures," Kaplan said. "The complimentary nature of what is going on with NSA and other agencies, and with Mitre and private involvement, should help create the common infrastructure needed to address these issues."
Edit By: Matt Hines
[Virus Security]Black Hat: Estonia attacks an example of online rioting, says researcher
There are lessons for companies that must deal with large-scale Web attacks
August 02, 2007 - LAS VEGAS - A series of online attacks that seriously disrupted Web sites belonging to several banking and government organizations in Estonia earlier this year may have been perpetrated by a loosely organized, politically motivated online mob, a security researcher suggested today at the Black Hat 2007 conference.
The attacks hold several lessons about how large-scale Internet attacks can unfold and the responses that may be needed to deal with them, said Gadi Evron, security evangelist for Israel-based Beyond Security. "The use of the Internet to create an online mob has proven itself and will likely receive more attention in the future," following the Estonia attacks, said Evron, who wrote a post-mortem report on the incident for the Estonian CERT.
The widely reported attacks in Estonia started in late April and crippled Web sites belonging to the Estonian government -- including that of the nation's prime minister -- as well as the sites of several banks and smaller sites run by schools. The online attacks are believed to have been triggered by the Estonian government's decision to relocate a Soviet-era war memorial in Tallinn called the Bronze Soldier.
The decision sparked more than two days of rioting in Tallinn by ethnic Russians, as well as a siege of the Estonian embassy in Moscow. It also appears to have set off an Internet riot aimed at the country's online infrastructure, Evron said.
Initial media reports suggested that the denial-of-service (DOS) attacks may have been organized by the Russian government in retaliation for Estonia's decision to move the statue. In reality, however, the attacks were carried out by an unknown number of Russian individuals with active support from security-savvy people in the Russian blogosphere, Evron said.
Many Russian language blogs offered simple and detailed instructions to their readers on how to overload Estonian Web sites using "ping" commands, for instance, Evron said. The bloggers also kept updating their advice as Estonian incident responders started defending against the initial attacks.
The attacks started with pings and quickly scaled up to more sophisticated methods, including attacks launched via botnets from outside Estonia. One attack was launched by a specially crafted botnet with its targets hard-coded in the source, Evron said. Some bloggers even attempted to collect money to hire botnets to launch attacks against targets in Estonia, he said.
The timing of the attacks, their scope and the sudden availability of botnets to aim at Estonian targets suggest that some level of organization was involved, Evron said. But there is no evidence to explain who was responsible.
Overall, none of the attack methods were new or sophisticated, Evron said. Neither were they particularly large as far as DOS attacks go, he said. But they were enough to seriously disrupt several services in what is a very Internet-dependent country. For instance, because bank sites were crippled, many citizens were unable to conduct ordinary transactions such as buying gas and groceries.
The attacks highlight several issues -- chief among them the importance of incident response, Evron said. When the attacks started, the Estonian responders focused first on the targets rather than the sources. Filtering technology was used to throttle back traffic aimed at target systems, which, at its peak, reached 100 to 1,000 times the normal volume.
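The responders used network filtering gear rather than application code, but the throttling idea itself is simple. Below is a minimal token-bucket sketch in Python -- an illustration of the general technique, not the Estonian deployment -- that admits a steady per-source request rate and drops the excess.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Admit a steady per-source request rate; reject the excess."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate    # tokens replenished per second
        self.burst = burst  # maximum tokens a source can bank
        self.tokens = defaultdict(lambda: burst)
        self.last = defaultdict(time.monotonic)

    def allow(self, source: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[source]
        self.last[source] = now
        self.tokens[source] = min(self.burst,
                                  self.tokens[source] + elapsed * self.rate)
        if self.tokens[source] >= 1.0:
            self.tokens[source] -= 1.0
            return True
        return False  # over the rate limit; a real filter would drop this

# Usage: allow each source 10 requests/second with a burst of 20.
limiter = TokenBucket(rate=10.0, burst=20.0)
if not limiter.allow("198.51.100.7"):  # example address (RFC 5737 range)
    pass  # drop or deprioritize the traffic here
```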
Quick decisions were made about which systems to protect first, and all connections to those systems from outside the country were blocked. Efforts were also made to lure attackers to less critical systems and draw their attention away from the more important ones, Evron said.
The Estonian incident also showed how -- at least in that country's case -- "critical infrastructure" proved to be banking and private-sector companies, ISPs and media Web sites, not Estonia's transportation or energy sectors, Evron said.
Edit By: Jaikumar Vijayan
[PC Technology]Painless Backups to USB Drives
Low-cost USB drives make it easier than ever to back up your data and take it with you.
USB thumb drives have declined in price recently, making now a great time to start using them for your daily backups. Unfortunately, not all USB drives support automatic backups.
Look for devices labeled 'USB Smart Drives'; these are enabled with U3 functions or with Lexar's PowerToGo, either of which offers a self-contained operating system that lets you access the files and programs on the drive from any USB-equipped PC, without leaving remnants of your session on the system when you remove the drive. Currently, 4GB U3 drives such as SanDisk's Cruzer cost about $55, and the Lexar 4GB JumpDrive Lightning costs from $70 to $100 online. Most of these drives come with trial or full versions of backup and syncing software, but you can download a free copy of SanDisk's CruzerSync for U3 (follow the download instructions) or try Migo Software's $30 Migo Personal for U3 (free trial).
Each works the same way: You select the folders and files you want to back up by clicking the box next to each folder in the outline tree. Both CruzerSync and Migo back up and sync your Microsoft Outlook and Outlook Express messages and address books, so you can access them on any computer. The backup and restore apps create mirror copies on the USB drive. You can set any of these programs to back up and/or sync specified folders automatically when you insert the drive into a USB port; some even run automatic backups at set times if the drive remains plugged in.
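For readers curious about the bare mechanics behind such tools, here is a minimal Python sketch of the same mirroring idea -- a stand-in for illustration, not the CruzerSync or Migo implementation. The folder list and drive path are assumptions you would change for your own setup.

```python
import filecmp
import shutil
from pathlib import Path

# The selected folders and the drive's mount point are assumptions
# for this sketch -- adjust them to your own machine.
SELECTED = [Path.home() / "Documents", Path.home() / "Pictures"]
DRIVE = Path("E:/") / "Backup"

def mirror(src: Path, dest: Path) -> None:
    """Copy files that are new or have changed since the last run."""
    dest.mkdir(parents=True, exist_ok=True)
    for item in src.iterdir():
        target = dest / item.name
        if item.is_dir():
            mirror(item, target)
        elif not target.exists() or not filecmp.cmp(item, target, shallow=True):
            shutil.copy2(item, target)  # copy2 preserves timestamps

if __name__ == "__main__":
    for folder in SELECTED:
        if folder.exists():
            mirror(folder, DRIVE / folder.name)
```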
Edit By: Michael S. Lasky
[PC Security]Warning: Laser Printers Could Be a Health Hazard
Your laser printer could be spewing toner over your home or office, according to an Australian air quality researcher.
Some home and office laser printers pose serious health risks and may spew out as much particulate matter as a cigarette smoker inhales, an Australian air quality researcher said Tuesday.
The study, which appeared today in the online edition of the American Chemical Society's Environmental Science & Technology (ES&T) journal, measured the particulate output of 62 laser printers, including models from name brands such as Canon, Hewlett-Packard and Ricoh. Particle emissions, believed to be toner -- the finely ground powder used to form images and characters on paper -- were measured in an open office floor plan, then ranked.
Specific printer results are listed in the published study.
What They Found
Lidia Morawska and colleagues at the Queensland University of Technology classified 17 of the 62 printers, or 27 percent, as "high particle emitters"; one of the 17 pumped out particulates at a rate comparable with emissions from cigarette smoking, the study said.
Morawska called the emissions "a significant health threat" because of the particles' small size, which makes them easy to inhale and likely to lodge in the deepest and smallest passageways of the lungs. The effects, she said, can range from simple irritation to much more serious illnesses, including cardiovascular problems or cancer. "Even very small concentrations can be related to health hazards," said Morawska. "Where the concentrations are significantly elevated means there is potentially a considerable hazard."
Two printers released medium levels of particulates, six released low levels, and 37 -- about 60 percent of those tested -- released no particles at all. HP, one of the world's leading printer sellers, dominated both the list of high-level emitters and the list of non-emitters.
HP's Response
When contacted by PC World, the company issued this statement: "HP is currently reviewing the Queensland University of Technology research on particle emission characteristics of office printers. Vigorous tests under standardized operating conditions are an integral part of HP's research and development and its strict quality control procedures."
"As part of these quality controls, HP assesses its LaserJet printing systems, original HP print cartridges and papers for dust release and possible material emissions to ensure compliance with applicable international health and safety requirements."
More Study Results
The research also found that office particulate levels increased fivefold during work hours because of laser printers. Generally, more particles were emitted when the printer was using a new toner cartridge, and when printing graphics or photographs that require larger amounts of toner than, say, text.
Morawska recommended that people make sure rooms at work and home with laser printers are well ventilated.
Edit By: Gregg Keizer