Friday, November 27, 2015

Microsoft's November Windows 10 update screwed up some users' privacy settings

The company has released a fix, and plans to put things back for affected users

People who installed the latest Windows 10 update may want to double-check their settings. Microsoft revealed Tuesday that it had pulled the update, released on November 12, from the Internet the day before because of a problem that reset some users' privacy settings during installation.

The bug reset settings on affected devices to make it easier for advertisers to track users across applications, and allowed devices to share users' information with wireless gizmos like Bluetooth beacons that don't explicitly pair with a PC, tablet or phone.

Microsoft released a fix on Tuesday, so anyone installing the update now shouldn't be affected by the bug. What's more, the company said in an emailed statement that those people who had their settings changed will have them restored to the correct configuration over the coming days. However, Microsoft won't say how it plans to do that yet.

The company said in its statement that the problem affected "an extremely small number of people who had already installed Windows 10 and applied the November update." It's not clear what triggered the bug, however.

The good news in all this is that Microsoft fixed the problem promptly after it became apparent. The bad news is that the company released an update that changed settings users rely on to maintain their privacy.

All of this comes at a time of heightened concern about what data Windows 10 collects and shares with Microsoft. The company offers settings to stop that collection (except for telemetry data that it thinks isn't a privacy issue), but all those settings are for naught if bugs render them useless.

Friday, November 20, 2015

74-338 Lync 2013 Depth Support Engineer


QUESTION 1
You work for a company named ABC.com. Your role of Lync Administrator includes the
management of the Microsoft Lync Server 2013 infrastructure.
Two Windows Server 2012 servers named ABC-DB01 and ABC-DB02 run SQL Server 2012.
ABC-DB01 and ABC-DB02 host a mirrored database for the Lync Server Central Management
Store (CMS). ABC-DB01 currently has the principal database and ABC-DB02 currently has the
mirror database. The mirrored database does not use a witness instance.
You need to manually fail over the mirrored database to enable you to perform maintenance on
ABC-DB01.
Which of the following Windows PowerShell cmdlets should you run?

A. Invoke-CsPoolFailover
B. Invoke-CsManagementStoreReplication
C. Invoke-CsBackupServiceSync
D. Invoke-CsManagementServerFailover

Answer: D

Explanation:
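A minimal sketch of how answer D might look in the Lync Server Management Shell. This is illustrative only; the surrounding Get-Cs* checks and the assumption that the cmdlet can run without additional parameters in a mirror scenario should be verified with Get-Help in your own deployment.

# Check which SQL Server instance currently holds the active CMS copy
Get-CsManagementConnection

# Review the cmdlet's parameters before running it (they vary by deployment)
Get-Help Invoke-CsManagementServerFailover -Detailed

# Fail the Central Management Server over so ABC-DB02 serves the active copy
# while ABC-DB01 is taken down for maintenance
Invoke-CsManagementServerFailover

# Confirm CMS replication is healthy after the failover
Get-CsManagementStoreReplicationStatus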


QUESTION 2
You work for a company named ABC.com. The company has a Microsoft Lync Server 2013
infrastructure that includes two Lync Server pools. Your role of Lync Administrator includes the
management of the Microsoft Lync Server 2013 infrastructure.
An Edge server named ABC-Edge1 is configured to use a pool named ABC-LyncPool1.ABC.com
as its next hop. You plan to fail over to a second pool named ABC-LyncPool2.ABC.com. Before
failing over the pool, you need to reconfigure the next hop for ABC-Edge1 to be
ABC-LyncPool2.ABC.com.
Which of the following Windows PowerShell cmdlets should you run?

A. Set-CsEdgeServer
B. Set-CsAVEdgeConfiguration
C. New-CsEdgeAllowList
D. Set-CsAccessEdgeConfiguration
E. Move-CsApplicationEndpoint

Answer: A

Explanation:
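A hedged one-liner showing how answer A could be applied from the Lync Server Management Shell; the Identity and Registrar service-ID syntax below is an assumption based on typical Set-CsEdgeServer usage and should be checked against Get-Help before use.

# Point the Edge server at the new next-hop pool before failing over
Set-CsEdgeServer -Identity "EdgeServer:ABC-Edge1.ABC.com" -Registrar "Registrar:ABC-LyncPool2.ABC.com"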


QUESTION 3
You work for a company named ABC.com. The company has two Active Directory sites in a
single Active Directory Domain Services domain named ABC.com. Your role of Lync
Administrator includes the management of the Microsoft Lync Server 2013 infrastructure.
The Lync infrastructure consists of a single pool named ABC-LyncPool1.ABC.com.
You have been asked to design a disaster recovery (DR) plan in the event of a failure of
ABC-LyncPool1.ABC.com. Part of the DR plan would be to configure a backup pool.
Which three of the following Windows PowerShell cmdlets would you need to run to recover the
CMS (Central Management Store) and the Lync user accounts? (Choose three)

A. Set-CsManagementServer
B. Install-CsDatabase
C. Set-CsLocationPolicy
D. Move-CsManagementServer
E. Invoke-CsManagementServerFailover
F. Invoke-CsPoolFailover

Answer: B,D,F

Explanation:
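A rough sketch of how the three cmdlets in B, D and F fit together when recovering to the backup pool. The SQL server FQDN, instance name and parameters are assumptions for illustration, so verify each step with Get-Help and the Lync Server disaster-recovery documentation.

# 1. Create a new Central Management store on the backup pool's SQL instance
Install-CsDatabase -CentralManagementDatabase -SqlServerFqdn "ABC-SQL02.ABC.com" -SqlInstanceName "rtc"

# 2. Move the Central Management Server role to the backup pool
Move-CsManagementServer

# 3. Fail the users of the failed pool over to the backup pool
Invoke-CsPoolFailover -PoolFqdn "ABC-LyncPool1.ABC.com" -DisasterMode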


QUESTION 4
You work for a company named ABC.com. The company has a single Active Directory Domain
Services domain named ABC.com. The company has a datacenter located in New York.
The New York datacenter hosts two Microsoft Lync Server 2013 pools named
ABC-LyncPool1.ABC.com and ABC-LyncPool2.ABC.com. ABC-LyncPool1.ABC.com hosts the CMS
(Central Management Store). All of the company’s 70,000 users are enabled for Lync. Your role
of Lync Administrator includes the management of the Microsoft Lync Server 2013 infrastructure.
The servers in ABC-LyncPool1.ABC.com suffer irreparable hardware failure. You need to recover
the Lync environment by failing over ABC-LyncPool1.ABC.com. All users will be hosted
permanently on ABC-LyncPool2.ABC.com.
Which of the following Windows PowerShell cmdlets should you run? (Choose all that apply)

A. Invoke-CsManagementServerFailover
B. Invoke-CsPoolFailover
C. Invoke-CsManagementStoreReplication
D. Invoke-CsPoolFailover
E. Move-CsManagementServer
F. Install-CsDatabase

Answer: D,E,F

Explanation:


QUESTION 5
You work for a company named ABC.com. Your role of Lync Administrator includes the
management of the Microsoft Lync Server 2013 infrastructure.
You receive reports from users that they are sometimes unable to make outbound calls. You
discover that the failures are caused by there being no available trunks.
To help troubleshoot the issue, you plan to run performance monitor counters to monitor the total
number of calls and the total number of inbound calls to determine trunk usage.
Against which server should you run the performance monitor counters?

A. Edge Server
B. Front End Server
C. Database Server
D. Mediation Server

Answer: D

Explanation:

Friday, October 30, 2015

644-906 Implementing and Maintaining Cisco Technologies Using IOS XR - (IMTXR)

QUESTION 3
What is the maximum long-term normal operating temperature of the Cisco CRS-1, ASR 9000
Series Routers, and XR 12000 Series Routers?

A. 40C (104F)
B. 50C (122F)
C. 55C (131F)
D. 65C (149F)

Answer: A

Explanation:


QUESTION 4
The Cisco CRS 16-Slot Line Card Chassis Site Planning Guide suggests having 48 inches of
clearance behind the chassis. What would definitely happen to the system if there were only 28
inches of clearance behind the Cisco CRS 16-Slot Line Card Chassis?

A. The system would overheat due to inadequate airflow.
B. The fabric card could not be exchanged if one failed.
C. The modular services card (MSC) could not be exchanged if one failed.
D. The fan tray could not be exchanged if one failed.

Answer: D

Explanation:


QUESTION 5
How many planes are there in the Cisco CRS-3 switch fabric?

A. 1
B. 3
C. 7
D. 8

Answer: D

Explanation:


QUESTION 6
What is the cell size of the cells that traverse the switch fabric on the Cisco CRS-3?

A. 128 bytes
B. 136 bytes
C. 144 bytes
D. 200 bytes
E. 288 bytes

Answer: B

Explanation:


QUESTION 7
Where are client interfaces terminated on the Cisco CRS-3?

A. the modular services card
B. the physical layer interface module(s)
C. the switch fabric interface terminator
D. the Service Processor 40
E. the Service Processor 140

Answer: B

Explanation:


QUESTION 8
In order to determine the hardware and firmware revision of a linecard, what is the correct
command that should be invoked?

A. RP/0/RP0/CPU0:CRS-MC#show version
B. RP/0/RP0/CPU0:CRS-MC#show platform
C. RP/0/RP0/CPU0:CRS-MC(admin)#show platform
D. RP/0/RP0/CPU0:CRS-MC#show diagnostic summary
E. RP/0/RP0/CPU0:CRS-MC(admin)#show diag details

Answer: E

Explanation:
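Answer E pairs admin mode with the diag command; a short session sketch on the assumption of a CRS running IOS XR, with the prompts and command taken from the answer options:

RP/0/RP0/CPU0:CRS-MC# admin
RP/0/RP0/CPU0:CRS-MC(admin)# show diag details
! The per-slot output includes each card's part number plus its hardware
! and firmware revisions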


QUESTION 9
In which mode can you check the power usage of a chassis?

A. in EXEC mode
B. in admin mode
C. in both EXEC and admin mode
D. in ROMMON mode
E. in environmental mode

Answer: B

Explanation:

Wednesday, September 30, 2015

As containers take off, so do security concerns

Containers offer a quick and easy way to package up applications but security is becoming a real concern

Containers offer a quick and easy way to package up applications and all their dependencies, and are popular for development and testing.

According to a recent survey sponsored by container data management company Cluster HQ, 73 percent of enterprises are currently using containers for development and testing, but only 39 percent are using them in a production environment.

But this is changing: 65 percent said that they plan to use containers in production in the next 12 months, and they cited security as their biggest worry. According to the survey, just over 60 percent said that security was either a major or a moderate barrier to adoption.

Containers can be run within virtual machines or on traditional servers. The idea is somewhat similar to that of a virtual machine itself, except that while a virtual machine includes a full copy of the operating system, a container does not, making them faster and easier to load up.

The downside is that containers are less isolated from one another than virtual machines are. In addition, because containers are an easy way to package and distribute applications, many are doing just that -- but not all the containers available on the web can be trusted, and not all libraries and components included in those containers are patched and up-to-date.

According to a recent Red Hat survey, 67 percent of organizations plan to begin using containers in production environments over the next two years, but 60 percent said that they were concerned about security issues.
Isolated, but not isolated enough

Although containers are not as thoroughly isolated from one another as virtual machines are, they are still more secure than running applications on their own.

"Your application is really more secure when it's running inside a Docker container," said Nathan McCauley, director of security at Docker, which currently dominates the container market.

According to the Cluster HQ survey, 92 percent of organizations are using or considering Docker containers, followed by LXC at 32 percent and Rocket at 21 percent.

Since the technology was first launched, McCauley said, Docker containers have had built-in security features such as the ability to limit what an application can do inside a container. For example, companies can set up read-only containers.
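A tiny Docker CLI illustration of that read-only option (the image and command are arbitrary choices, not anything from the survey):

# With --read-only the container cannot modify its own root filesystem
docker run --rm --read-only alpine sh -c 'touch /test || echo "write refused: filesystem is read-only"'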

Containers also use name spaces by default, he said, which prevent applications from being able to see other containers on the same machine.

"You can't attack something else because you don't even know it exists," he said. "You can even get a handle on another process on the machine, because you don't even know it's there."

However, container isolation doesn't go far enough, said Simon Crosby, co-founder and CTO at security vendor Bromium.

"Containers do not make a promise of providing resilient, multi-tenant isolation," he said. "It is possible for malicious code to escape from a container to attack the operation system or the other containers on the machine."

If a company isn't looking to get maximum efficiency out of its containers, however, it can run just one container per virtual machine.

This is the case with Nashua, NH-based Pneuron, which uses containers to distribute its business application building blocks to customers.

"We wanted to have assigned resourcing in a virtual machine to be usable by a specific container, rather than having two containers fight for a shared set of resources," said Tom Fountain, the company's CTO. "We think it's simpler at the administrative level."

Plus, this gives the application a second layer of security, he said.

"The ability to configure a particular virtual machine will provide a layer of insulation and security," he said. "Then when we're deployed inside that virtual machine then there's one layer of security that's put around the container, and then within our own container we have additional layers of security as well."

But the typical use case is multiple containers inside a single machine, according to a survey of IT professionals released Wednesday by container security vendor Twistlock.

Only 15 percent of organizations run one container per virtual machine. The majority of the respondents, 62 percent, said that their companies run multiple containers on a single virtual machine, and 28 percent run containers on bare metal.

And the isolation issue is still not figured out, said Josh Bressers, security product manager at Red Hat.

"Every container is sharing the same kernel," he said. "So if someone can leverage a security flaw to get inside the kernel, they can get into all the other containers running that kernel. But I'm confident we will solve it at some point."

Bressers recommended that when companies think about container security, they apply the same principles as they would apply to a naked, non-containerized application -- not the principles they would apply to a virtual machine.

"Some people think that containers are more secure than they are," he said.
Vulnerable images

McCauley said that Docker is also working to address another security issue related to containers -- that of untrusted content.

According to BanyanOps, a container technology company currently in private beta, more than 30 percent of containers distributed in the official repositories have high priority security vulnerabilities such as Shellshock and Heartbleed.

Outside the official repositories, that number jumps to about 40 percent.

Of the images created this year and distributed in the official repositories, 74 percent had high or medium priority vulnerabilities.

"In other words, three out of every four images created this year have vulnerabilities that are relatively easy to exploit with a potentially high impact," wrote founder Yoshio Turner in the report.

In August, Docker announced the release of Docker Content Trust, a new feature in the container engine that makes it possible to verify the publisher of container images.

"It provides cryptographic guarantees and really leapfrogs all other secure software distribution mechanisms," Docker's McCauley said. "It provides a solid basis for the content you pull down, so that you know that it came from the folks you expect it to come from."

Red Hat, for example, which has its own container repository, signs its containers, said Red Hat's Bressers.

"We say, this container came from Red Hat, we know what's in it, and it's been updated appropriately," he said. "People think they can just download random containers off the Internet and run them. That's not smart. If you're running untrusted containers, you can get yourself in trouble. And even if it's a trusted container, make sure you have security updates installed."

According to Docker's McCauley, existing security tools should be able to work on containers the same way they do on regular applications; he also recommended that companies follow Linux security best practices.

Earlier this year Docker, in partnership with the Center for Internet Security, published a detailed security benchmark best practices document, along with a tool called Docker Bench that checks host machines against these recommendations and generates a status report.
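A minimal way to try the tool, assuming the public docker-bench-security repository on GitHub; the clone URL and script name are worth double-checking against Docker's documentation:

# Fetch the benchmark and run it against the local Docker host
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh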

However, for production deployment, organizations need tools that they can use that are similar to the management and security tools that already exist for virtualization, said Eric Chiu, president and co-founder at virtualization security vendor HyTrust.

"Role-based access controls, audit-quality logging and monitoring, encryption of data, hardening of the containers -- all these are going to be required," he said.

In addition, container technology makes it difficult to see what's going on, experts say, and legacy systems can't cut it.

"Lack of visibility into containers can mean that it is harder to observe and manage what is happening inside of them," said Loris Degioanni, CEO at Sysdig, one of the new vendors offering container management tools.

Another new vendor in this space is Twistlock, which came out of stealth mode in May.

"Once your developers start to run containers, IT and IT security suddenly becomes blind to a lot of things that happen," said Chenxi Wang, the company's chief strategy officer.

Say, for example, you want to run anti-virus software. According to Wang, it won't run inside the container itself, and if it's running outside the container, on the virtual machine, it can't see into the container.

Twistlock provides tools that can add security at multiple points. It can scan a company's repository of container images, scan containers as they are loaded, and prevent vulnerable containers from launching.

"For example, if the application inside the container is allowed to run as root, we can say that it's a violation of policy and stop it from running," she said.

Twistlock can monitor whether a container is communicating with known command-and-control hosts and either report it, cut off the communication channel, or shut down the container altogether.

And the company also monitors communications between the container and the underlying Docker infrastructure, to detect applications that are trying to issue privileged commands or otherwise tunnel out of the container.

Market outlook

According to IDC analyst Gary Chen, container technology is still so new that most companies are still figuring out what value containers offer and how they're going to use them.

"Today, it's not really a big market," he said. "It's still really early in the game. Security is something you need once you start to put containers into operations."

That will change once containers get more widely deployed.

"I wouldn't be surprised if the big guys eventually got into this marketplace," he said.

More than 800 million containers have been downloaded so far by tens of thousands of enterprises, according to Docker.

But it's hard to calculate the dollar value of this market, said Joerg Fritsch, research director for security and risk management at research firm Gartner.

"Docker has not yet found a way to monetize their software," he said, and there are very few other vendors offering services in this space. He estimates the market size to be around $200 million or $300 million, much of it from just a single services vendor, Odin, formerly the service provider part of virtualization company Parallels.

With the exception of Odin, most of the vendors in this space, including Docker itself, are relatively new startups, he said, and there are few commercial management and security tools available for enterprise customers.

"When you buy from startups you always have this business risk, that a startup will change its identity on the way," Firtsch said.

Tuesday, September 22, 2015

CompTIA Server+ Certification Training 2015

CompTIA's Server+ 2015 is a vendor-neutral certification that deals with every aspect of the "care and feeding" of server computers. While nearly any computer can be used as a server in a small networking environment, many organizations require dedicated network servers built to high performance specifications. These powerful machines are called upon to handle hundreds (if not thousands) of user accounts, and all of the network activity and requests generated by these users. Additionally, there's a variety of specialized servers (e.g. database servers, file and print servers, web servers, etc.) that can be deployed to perform critical roles in organizations.

The Server+ cert is aimed at technicians (ideally with a CompTIA A+ cert) who have 18 to 24 months of professional experience working with server hardware and software. The Server+ cert was developed in consultation with several industry partners, and is recommended or required for server technicians who work for Dell, HP, IBM, Intel, Lenovo and Xerox. First released in 2001, the Server+ exam was updated in 2005, and again in 2009.

Server+ training

There are a number of different training options available for CompTIA's Server+ 2015. For students on a budget, the most affordable option involves the use of printed self-study manuals. These self-paced books are a good option for candidates who have access to a test lab outfitted with computer server hardware and software, and who feel confident in their ability to teach themselves material from texts. Self-study manuals can also give candidates the most flexibility when scheduling training sessions for themselves.

Server+ self-study manuals are available from several vendors. Students should shop online in order to find the best pricing on these materials.

Self-study
Candidates who prefer more dynamic training should look at self-paced video courseware. This form of training uses video lessons on optical disks, or may be offered through an online streaming video subscription service. Some of the vendors who create training manuals also create video courseware, and will often bundle the two products together. Self-paced video courseware can be more engaging than printed materials alone, while still offering the same flexibility when it comes to scheduling lessons.

Instructor-led training for Server+ is the most expensive option available, but offers the most beneficial learning experience to students who need interaction with a live instructor in order to learn new material. Instructor-led training can be purchased as virtual classroom courses delivered over the Internet, or traditional classroom courses held at a technical school.

Online courses
Virtual classroom courses use special client software or a web browser plug-in to simultaneously connect several students to an online classroom, which is managed by a live instructor. Virtual classroom courses are a good option for students who live a great distance from a technical school, or who have any conditions that make it difficult for them to travel to a physical classroom. These classes take place in real-time, so candidates must be able to work them into their existing schedules.
Traditional classroom

Finally, there are traditional classroom courses. For some, this training option offers the best learning experience: a live instructor, other students to collaborate with, and (at most schools) access to all of the relevant hardware and software labs necessary to master Server+ course content.

Here are the most common subjects a Server+ student can expect to encounter, no matter which training option they select:

Identifying and configuring server hardware components
Installing and configuring a network operating system
Server security fundamentals
Server-based storage technologies
Disaster recovery and contingency planning
Server troubleshooting tools and techniques

Server+ certification exam
There are no prerequisites for taking the Server+ exam, although CompTIA recommends that candidates should have their A+ certification, and somewhere between 18 and 24 months experience working with server computer hardware and software. The Server+ exam can be booked and taken at any authorized CompTIA exam center. As of this writing, the current Server+ exam code is SK0-003. The exam is available in English, Chinese, German and Japanese.

The Server+ exam is made up of 100 multiple-choice questions. Candidates have 90 minutes to complete the exam. The passing score for the exam is 750 on a scale of 100-900, and candidates are informed immediately upon exam completion if they have passed or not.

Here's a list of the Server+ exam knowledge domains, with an estimate of how much exam content is dedicated to each:

System Hardware (21%)
Software (19%)
Storage (14%)
IT Environment (11%)
Disaster Recovery (11%)
Troubleshooting (24%)

Server+ in the workplace
The Server+ cert is valid for three years once it has been awarded by CompTIA. Candidates can renew the Server+ by earning a set total of CompTIA Continuing Education Units (CEUs) during the three-year certified period. CompTIA CEUs are attained by earning additional CompTIA certs, or can be gained by participating in certain approved industry activities. For more information about the CompTIA Continuing Education Program, visit the CompTIA Certification website.

If the Server+ is allowed to expire, the exam must be passed again in order to re-certify.

Some of the job roles associated with the Server+ certification include the following:
Authorized Server Technician
Server Sales Specialist
Network Server Support Specialist
Application Server Specialist
Web Server Specialist


Tuesday, September 1, 2015

VMware rounds out data center virtualization stack

VMware has added more components to its software-defined data center, updating vCloud, NSX and its OpenStack distribution

VMware has updated its stack of data center virtualization software, rounding out capabilities that allow an organization to run an entire data center operation and related cloud services as a single unified entity.

Among the new additions are components to the vCloud Air suite of software for running cloud services. The company has expanded its network virtualization software to provide more ways of moving a workload across a data center. And it has also released a new version of its OpenStack distribution for running cloud workloads.

VMware's vCloud Air is the company's answer to the success of cloud service providers such as Amazon Web Services. The software lets organizations run their own IT operations as a set of cloud services. It also provides a unified base for multiple cloud service providers to offer vCloud services that interoperate with each other as well as with customers' internal vCloud deployments.

VMware vCloud Air now has a number of new options for storing data, such as vCloud Air Object Storage for unstructured data. It features built-in redundancy, eliminating the need to make backups, and the data can be accessed from anywhere in the world.

The company also has a new database-as-a-service, called vCloud Air SQL, which provides the ability to store relational data on a pay-as-you-go model. Initially, vCloud Air SQL will be compatible with Microsoft SQL Server, but plans are to make it compatible with other relational databases.

The company has updated its VMware vCloud Air Disaster Recovery Services, which provide a way to ensure that operations continue even if the enterprise's data center goes offline. It now has a new management console for testing, executing and orchestrating disaster recovery plans.

VMware also updated its software for virtualizing network operations. VMware NSX 6.2 allows a virtual machine to be copied across a single data center, or even two different data centers, while retaining its networking and security settings.

NSX 6.2 now can recognize switches through the Open vSwitch Database (OVSDB) protocol, providing new ways for the users of such switches to segment their physical servers into smaller working groups. VMware NSX 6.2 also has a new central command line interface and a set of troubleshooting capabilities, called TraceFlow.

VMware says NSX is now being used by more than 700 customers, with more than 100 of them running it in production deployments.

VMware vRealize Operations, which provides a single interface to watch the operational health of applications running on VMware, has been updated to include capabilities to find the best set of resources within a data center to place a workload. It also does rebalancing to move workloads around for most efficient use of data center resources.

Also on the management side, the company has updated its logging software, which is now capable of ingesting 15,000 messages per second. The software also now offers new ways to chart and search through operational data.

The newly released VMware Integrated OpenStack 2 is based on the latest release of the open source OpenStack software, which was codenamed Kilo and released in April. The new release has a load-balancing feature as well as the ability to automatically scale up workloads should they require more resources.


Monday, August 17, 2015

Top 10 technology schools

Interested in going to one of the best colleges or universities to study technology? Here are the top 10 schools known for their computer science and engineering programs.

Top technology schools
Every year, Money releases its rankings of every college and university in the U.S., and not surprisingly, a number of those top schools are leaders in the tech space. Here are the top 10 technology schools, according to Money's most recent survey of the best colleges in America.

Stanford University
First on the list for not only technology colleges, but all colleges, Stanford University has an impressive 96 percent graduation rate. The average price for a degree is $178,731 and students earn, on average, $64,400 per year upon graduation. Stanford's global engineering program allows its 4,850 students to travel around the globe while studying engineering. There are nine departments in the engineering program: aeronautics and astronautics, bioengineering, chemical engineering, civil and environmental engineering, computer science, electrical engineering, management science and engineering, materials science and engineering, and mechanical engineering.

Massachusetts Institute of Technology
The Massachusetts Institute of Technology, located in Cambridge, Mass., is the second best technology school in the country, with a 93 percent graduation rate. The average net price of a degree comes in at a $166,855, but students can expect an average starting salary of $72,500 per year after graduating. As one of the top engineering schools, it's ranked number 1 for chemical, aerospace/aeronautical, computer and electrical engineering. The top employers for the 57 percent of graduates that enter the workforce immediately include companies like Google, Amazon, Goldman Sachs and ExxonMobil. Another 32 percent of students, however, go on to pursue a higher degree.

California Institute of Technology
Located in Pasadena, Calif., the California Institute of Technology has a graduation rate of 93 percent. The average cost of a degree is $186,122, and students earn an average starting salary of $72,300. Caltech, as it's often called, has departments in aerospace, applied physics and materials studies, computing and mathematical sciences, electrical engineering, environmental science and engineering, mechanical and civil engineering, and medical engineering. The prestigious college is also home to 31 Nobel Prize recipients.

Harvey Mudd College
Harvey Mudd College in Claremont, Calif., has a strong technology program, putting it at number 4 on the list of top technology schools. The cost of tuition is also one of the highest on this list, at $196,551 for a degree. Graduates of Harvey Mudd earn an average of $76,400 early on in their careers and the graduation rate is 91 percent. The engineering program at Harvey Mudd College focuses on helping students apply their skills to real world situations. Students can get professional experience and help solve design problems outside of the classroom through an engineering clinic.

Harvard University
Harvard University, located in Cambridge, Mass., technically ties with Harvey Mudd for top technology schools, and top overall colleges. The graduation rate is 97 percent and the average price of a degree is $187,763, while graduates earn an average annual salary of $60,000 when starting their careers. In the John A. Paulson School of Engineering and Applied Sciences at Harvard, which goes back as far as 1847, undergraduate students can study applied mathematics, biomedical engineering, computer science, electrical engineering, engineering sciences and mechanical engineering.

University of California at Berkeley
The University of California at Berkeley has a graduation rate of 91 percent, and students can get a degree for around $133,549. After graduation, the average salary for students starting out their careers is $58,300 per year. The electrical engineering and computer science division of the University of California at Berkeley has around 2,000 undergraduate students and is the largest department within the university.

University of Pennsylvania
University of Pennsylvania, located in Philadelphia, has a graduation rate of 96 percent and the average cost of a degree is $194,148. Students graduating from Penn and starting out their careers earn an average annual starting salary of $59,200. The Penn engineering department focuses on computer and information science. Students can study computer science, computer engineering, digital media design, networked and social systems engineering, computational biology, and computer and cognitive science.

Rice University
Located in Houston, Rice University has a graduation rate of 91 percent and the average cost of a degree is $157,824. Upon graduation, the average starting salary for students comes in at $61,200 per year. Rice University has a Department of Computer Science where students can work in faculty research programs and describes the perfect computer science student as a "mathematician seeking adventure," a quote from system architect Bob Barton. In the electrical and computer engineering department, students can prepare for a career in oil and gas, wearables, entertainment, renewable energy, gaming, healthcare, space industry, security and aviation.

Brigham Young University-Provo
Brigham Young University-Provo, located in Provo, Utah, has a graduation rate of 78 percent, but students won't have as many loans as other colleges on this list. The average price of a degree is a moderate $80,988 and the average starting salary for graduates is around $51,600 per year. Brigham Young University-Provo offers degrees in electrical engineering, computer engineering and computer science. With a wide array of programs to choose from in each degree, Brigham Young University-Provo boasts a rigorous course load with an emphasis on gaining practical skills for the workforce.

Texas A&M University
College Station, Texas, is home to Texas A&M University, where 79 percent of students graduate and the average cost of a degree is $84,732. Students can expect to earn an average starting salary of $54,000 per year after graduation. The Texas A&M computer science and engineering programs boast an "open, accepting, and compassionate community that encourages the exploration of ideas." Students should expect to leave the program prepared to help solve real-world challenges in the technology industry through applied research.



Friday, July 17, 2015

Google reports strong earnings, stock jumps 7%

Revenue growth, however, has slowed in recent years

Google's stock jumped more than 7 percent in the after-market hours on Thursday, after the company reported strong earnings results for the second quarter.

Net income for the period ended June 30 was $3.93 billion, up 17 percent from $3.35 billion in the second quarter of 2014, Google announced Thursday. Excluding certain expenses, Google reported earnings of $6.99 per share, beating analysts' estimate of $6.71, as polled by the Thomson Financial Network.

The company's stock was trading at around $620 in after-hours trading following the earnings report, up from its close of $579.

Still, Google's growth has been slowing.

Growth in the Internet giant's crucial advertising sales has taken a hit over the last few quarters. Revenue is still growing, but at a slower rate than in years past, as the company has made new investments in ambitious "moon shot" projects like self-driving cars and Internet balloons in the stratosphere.

The company's sales for the second quarter were $17.73 billion, up 11 percent, coming in just shy of analysts' estimates of $17.75 billion.

But the 11 percent growth rate is the smallest revenue increase reported by the company since 2012.

The company reported mixed results in its ads business. Its paid clicks grew by 18 percent, but the cost-per-click paid by advertisers fell by 11 percent.

The company's operating expenses, meanwhile, grew by 13 percent, to $6.32 billion.

One concern among investors is that Google is struggling to grow its ad revenue on mobile devices. In comparison to the desktop, ads in mobile search results are smaller, and can yield fewer interactions from users, driving down their price.

Google has tried to attract more users to its ads on mobile by adding more information and functionality to them, like product ratings and store inventory information. Just this week, the company said it was rolling out a new way to let users make purchases directly from the ads in mobile search results.

Google is also competing with a rising number of apps made by other developers built around specific types of searches or online activities. In April, Google made a change to its search algorithm that prioritized sites that had been optimized for mobile. The change, dubbed "Mobilegeddon," was aimed at getting more people to use Google search on mobile by surfacing higher quality, mobile-friendly sites.

Google doesn't break out its desktop versus mobile advertising sales. But Google might be making new strides in mobile. In its announcement Thursday, CFO Ruth Porat said mobile "stood out" in the context of the company's core search results.


Tuesday, June 23, 2015

Microsoft muddies waters about free copy of Windows 10 to beta testers

Revised statement leads to more questions: Will preview participants get free copy, no matter how they installed it, or not?

Microsoft on Friday said Windows 10 beta testers will receive a free copy of the operating system's stable build next month, then almost immediately tweaked its statement, again muddying the waters.

Gabriel Aul, the engineering general manager for Microsoft's operating system group, got the ball rolling Friday in a blog where he also pointed out several changes to the Windows Insider program, Microsoft's name for its Windows 10 preview regimen. The most newsworthy of Aul's statements was that Insider participants would receive Windows 10's final code, even if they didn't install the preview on a Windows 7 or 8.1 PC eligible for the one-year free upgrade.

"Windows Insiders running the Windows 10 Insider Preview (Home and Pro editions) with their registered MSA [Microsoft Account] connected to their PC will receive the final release build of Windows 10 starting on July 29," Aul said. "As long as you are running an Insider Preview build and connected with the MSA you used to register, you will receive the Windows 10 final release build."

In several tweets Friday, Aul expanded on the deal, which he had alluded to several months ago without spelling out details.

"Install [build] 10130, connect registered Insider MSA, upgrade to RTM [release to manufacturing], stays genuine," Aul said in one Twitter message on Friday when answering a reporter's question of, "So to be clear: install 10130, upgrade to RTM when available, and it'll stay genuine + activated with no money spent, forever?"

"Genuine" is Microsoft-speak for a legitimate, activated copy of its software. As of Sunday, build 10130 was the most recent of Windows 10; Microsoft released it on May 29.

The move as Aul outlined it would be unprecedented for the Redmond, Wash. company, which has historically turned a deaf ear to suggestions from public beta testers that they be rewarded for their work hunting down bugs with free software.

But while the decision evoked a more generous Microsoft, it was tempered by the reality that most customers running consumer- or business-grade editions of Windows 7 and 8.1 -- with the notable exception of Windows Enterprise, the for-volume-licensing-customers-only SKU (stock-keeping unit) -- will get a free copy of Windows 10 in any case.

The route to a free copy of Windows 10, Aul implied, would be of interest only to users who did not have a genuine-marked copy of Windows 7 Home Starter, Home Basic, Home Premium, Ultimate or Professional, or Windows 8.1 or Windows 8.1 Pro.

Those users would include people who had PCs currently running an ineligible OS, such as Windows Vista or the even older Windows XP, or who want to equip a virtual machine (VM) with Windows 10 on a device running another edition of Windows or, say, a Mac armed with software like VMware's Fusion or the open-source VirtualBox.

Aul's reference to build 10130 may mean that the window of opportunity for the free Windows 10 will shut once that build is superseded by the next iteration.

More interesting, however, was an addition to Aul's blog made between its Friday debut and late Saturday: "It's important to note that only people running Genuine Windows 7 or Windows 8.1 can upgrade to Windows 10 as part of the free upgrade offer."

That line was tacked onto the end of the paragraph in which Aul had described the process by which Insider participants would be able to obtain the stable release on July 29, and that all testers -- whether they upgraded from Windows 7 or 8.1 or installed the preview on a wiped drive or VM -- would be able to run Windows 10 free of charge.

The blog post was also edited, removing the word "activated" from the original. The initial post said, "As long as you are running an Insider Preview build and connected with the MSA you used to register, you will receive the Windows 10 final release build and remain activated. Once you have successfully installed this build and activated, you will also be able to clean install on that PC from final media if you want to start over fresh [emphasis added]."

The revamped post deleted the words in bold above.

Microsoft's (or Aul's) changes threw doubt onto the statements Aul had made. He did not reply to a question posed via Twitter late Saturday about whether the process as he outlined still stood.

The removal of "activation" -- and the new line with the term "genuine" in it -- signaled that it did not. The simplest explanation is that while Microsoft will, in fact, give testers the stable build, it will not be activated with a product key, and thus "non-genuine" in Microsoft parlance, unless some other step is taken, perhaps a connection to a prior copy of Windows 7 or 8.1.

Non-genuine copies of Windows are marked as such with a watermark. Microsoft has not revealed what other restrictions might be placed on an unactivated or non-genuine copy of Windows 10.

Interpretation gymnastics are virtually required when parsing Microsoft's statements. Microsoft chooses its words carefully, and when it does disclose information, often does so in parcels that are by turns opaque, ambiguous and confusing to customers. That frequently forces it to retract or modify earlier comments.

Something similar occurred earlier this year when Microsoft seemed to say that non-genuine copies would be upgraded to legitimate versions of Windows 10. Days later the company walked back from that stance, saying that the free Windows 10 upgrade offer "will not apply to non-genuine Windows devices."

The confusion may be frustrating to some customers -- as in many other cases, customers who are among the most vocal of Microsoft's -- but moot for the vast majority of users, who will simply upgrade existing, and eligible, PCs. Microsoft's licensing is complex enough that there are countless edge cases where ambiguity is a side effect.

Still, the lack of clarity about many questions related to Windows 10 at this late date is disturbing, although not rare for Microsoft. At times the company seems entirely unable to come clean about its policies.

Computerworld, for example, installed the 10130 build from a disk image onto a new VM on a Mac -- not as an upgrade from one equipped with Windows 7 or 8.1 -- and although it was marked as "Windows is activated," that may not last.





Wednesday, May 20, 2015

300-115 SWITCH Implementing Cisco IP Switched Networks

Exam Number: 300-115 SWITCH
Associated Certifications: CCNP Routing and Switching, CCDP
Duration: 120 minutes (45-65 questions)
Available Languages: English
Register: Pearson VUE
Exam Policies: Read current policies and requirements
Exam Tutorial: Review type of exam questions


Exam Description
Implementing Cisco IP Switched Networks (SWITCH 300-115) is a 120-minute qualifying exam with 45‒55 questions for the Cisco CCNP Routing and Switching and CCDP certifications. The SWITCH 300-115 exam certifies the switching knowledge and skills of successful candidates. They are certified in planning, configuring, and verifying the implementation of complex enterprise switching solutions that use the Cisco Enterprise Campus Architecture.

The SWITCH exam also covers highly secure integration of VLANs and WLANs.
The following topics are general guidelines for the content that is likely to be included on the exam. However, other related topics may also appear on any specific version of the exam. To better reflect the contents of the exam and for clarity, the following guidelines may change at any time without notice.


1.0 Layer 2 Technologies 65%
1.1 Configure and verify switch administration
1.1.a SDM templates
1.1.b Managing MAC address table
1.1.c Troubleshoot Err-disable recovery

1.2 Configure and verify Layer 2 protocols
1.2.a CDP, LLDP
1.2.b UDLD

1.3 Configure and verify VLANs
1.3.a Access ports
1.3.b VLAN database
1.3.c Normal, extended VLAN, voice VLAN

1.4 Configure and verify trunking
1.4.a VTPv1, VTPv2, VTPv3, VTP pruning
1.4.b dot1Q
1.4.c Native VLAN
1.4.d Manual pruning

1.5 Configure and verify EtherChannels
1.5.a LACP, PAgP, manual
1.5.b Layer 2, Layer 3
1.5.c Load balancing
1.5.d EtherChannel misconfiguration guard

1.6 Configure and verify spanning tree
1.6.a PVST+, RPVST+, MST
1.6.b Switch priority, port priority, path cost, STP timers
1.6.c PortFast, BPDUguard, BPDUfilter
1.6.d Loopguard and Rootguard

1.7 Configure and verify other LAN switching technologies
1.7.a SPAN, RSPAN

1.8 Describe chassis virtualization and aggregation technologies
1.8.a Stackwise

2.0 Infrastructure Security 20%

2.1 Configure and verify switch security features

2.1.a DHCP snooping
2.1.b IP Source Guard
2.1.c Dynamic ARP inspection
2.1.d Port security
2.1.e Private VLAN
2.1.f Storm control

2.2 Describe device security using Cisco IOS AAA with TACACS+ and RADIUS

2.2.a AAA with TACACS+ and RADIUS
2.2.b Local privilege authorization fallback

3.0 Infrastructure Services 15%

3.1 Configure and verify first-hop redundancy protocols

3.1.a HSRP
3.1.b VRRP
3.1.c GLBP



QUESTION 1
What is the maximum number of switches that can be stacked using Cisco StackWise?

A. 4
B. 5
C. 8
D. 9
E. 10
F. 13

Answer: D

Explanation:


QUESTION 2
A network engineer wants to add a new switch to an existing switch stack. Which configuration
must be added to the new switch before it can be added to the switch stack?

A. No configuration must be added.
B. stack ID
C. IP address
D. VLAN information
E. VTP information

Answer: A

Explanation:


QUESTION 3
What percentage of bandwidth is reduced when a stack cable is broken?

A. 0
B. 25
C. 50
D. 75
E. 100

Answer: C

Explanation:


QUESTION 4
Refer to the exhibit.



Which set of configurations will result in all ports on both switches successfully bundling into an
EtherChannel?

A. switch1
     channel-group 1 mode active
   switch2
     channel-group 1 mode auto
B. switch1
     channel-group 1 mode desirable
   switch2
     channel-group 1 mode passive
C. switch1
     channel-group 1 mode on
   switch2
     channel-group 1 mode auto
D. switch1
     channel-group 1 mode desirable
   switch2
     channel-group 1 mode auto

Answer: D

Explanation:
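A configuration sketch of the combination in answer D, PAgP desirable on one side answered by auto on the other; the interface range is an assumption, since the exhibit is not reproduced here:

! switch1: actively initiate PAgP negotiation on the bundled ports
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode desirable
! switch2: respond to PAgP negotiation from switch1
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode auto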


QUESTION 5
Refer to the exhibit.



How can the traffic that is mirrored out the GigabitEthernet0/48 port be limited to only traffic that is
received or transmitted in VLAN 10 on the GigabitEthernet0/1 port?

A. Change the configuration for GigabitEthernet0/48 so that it is a member of VLAN 10.
B. Add an access list to GigabitEthernet0/48 to filter out traffic that is not in VLAN 10.
C. Apply the monitor session filter globally to allow only traffic from VLAN 10.
D. Change the monitor session source to VLAN 10 instead of the physical interface.

Answer: C

Explanation:
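A sketch of answer C using the interface numbers from the question; the session number is an assumption:

! Mirror traffic from Gi0/1 to Gi0/48, limited to VLAN 10 by the session filter
monitor session 1 source interface GigabitEthernet0/1
monitor session 1 filter vlan 10
monitor session 1 destination interface GigabitEthernet0/48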

Saturday, April 25, 2015

Who’s behind Linux now, and should you be afraid?

Most Linux kernel code isn’t developed by who you might think. Here’s a closer look at why this matters.

If you think that Linux is still the "rebel code" -- the antiestablishment, software-just-wants-to-be-free operating system developed by independent programmers working on their own time -- then it's time to think again.

The Linux kernel is the lowest level of software running on a Linux system, charged with managing the hardware, running user programs, and maintaining the security and integrity of the whole setup. What many people don't realize is that development is now mainly carried out by a small group of paid developers.

A large proportion of these developers are working for "the man" -- large establishment companies in the software and hardware industries, with names like IBM, Intel, Texas Instruments and Cisco. That's according to a Linux Foundation report on Linux kernel development published in February.
Nobody codes for free

In fact, it turns out that more than 80 percent of all Linux kernel development is "demonstrably done by developers who are being paid for their work," by these big (and sometimes smaller) companies, according to the report.

One organization that isn’t featured in the report's list of companies paying its staff to develop the Linux kernel is Microsoft, a company whose proprietary software model once made it enemy No. 1 for many in the open source movement, but which now claims to embrace free code.

But one that is featured in the report is Huawei, the Chinese technology company founded by a former Chinese People's Liberation Army officer. That’s a possible cause for concern: The company denies having links to the Chinese government, but some governments, including those in the U.S., U.K. and Australia, have banned the purchasing of certain Huawei hardware products amid worries that they may contain software back doors that could be used for spying.

About 1 percent of all the changes to the Linux kernel are currently written by developers paid by Huawei, according to the report.
Keeping open source open

Amanda McPherson, vice president of developer forums at the Linux Foundation, points out that the whole point of open source software is to remain open to review and close scrutiny, in contrast to proprietary software that runs in many hardware products sold by Huawei and other companies.

“No one can submit a patch on their own," she says. "Security is always a concern, but every patch goes through maintainers, and there is lots of code review. That is a much more secure mechanism than a closed system with no source code availability."

That may be true, but the severe Heartbleed and Shellshock vulnerabilities recently discovered in the open source OpenSSL and Bash software demonstrate that insecure code can be introduced into open source products -- unintentionally or perhaps deliberately -- and remain undetected for years.

The fact that the vast majority of Linux kernel developers are paid to do so by their employers is a big change from the Linux that Linus Torvalds, then a student at the University of Helsinki, first announced on comp.os.minix in August 1991. At the time he said, "I'm doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones."

In fact, the volume of contributions from students and other volunteers to the Linux kernel has been in steady decline for some time, according to the report: from 14.6 percent of contributions in 2012 to just 11.8 percent now.

"I think that when we started collecting these figures, it was a surprise that so many contributors are paid, and in fact it still is a surprise to the general public. But Linux is a highly commercial enterprise," McPherson says. "Many people thought it was volunteers working in their basements. I think it is good that companies are contributing, even though they are contributing for selfish reasons. They are supporting Linux, but they can't own it or dictate how it is developed."

She points out that if Linux were an application, then paid-for developers would be adding features that met the needs of the corporations that paid them. But the kernel is much more low-level code, and the sorts of contributions that paid developers make often involve enabling hardware connections by providing kernel drivers.
Losing its amateur status

An interesting question, then, is why Linux kernel development has changed so much from the "just a hobby" approach originally envisioned by Torvalds back in 1991, to professional developers working on company time.

One obvious possible answer is that large enterprises, especially hardware manufacturers like Intel or Texas Instruments, have an interest in ensuring that there are Linux drivers for their hardware, and that the kernel can otherwise support their products. Over time, as Linux has become increasingly popular, this type of support has become increasingly important.

But McPherson believes a simpler reason is more plausible. "Kernel developers are in short supply, so anybody who demonstrates an ability to get code into the mainline tends not to have trouble finding job offers. Indeed, the bigger problem can be fending those offers off," the report says.

On a more positive note, the report does highlight some of the achievements of what McPherson describes as "the most collaborative software project in history."

Thanks to contributions from 11,695 developers working for over 1,200 companies, the kernel has been updated with major releases every 8 to 12 weeks. Each release includes more than 10,000 changes, which means that changes are accepted into the kernel at the staggering rate of more than seven every hour.



Thursday, April 16, 2015

The day the first iPad arrived

Five years ago Friday, April 3, 2010, the first Apple iPads were delivered to the public.

April 3, 2010
Tablets had always flopped, so there was no shortage of naysayers pooh-poohing Apple's new iPad when the first model was delivered to homes and made available in stores on April 3, 2010. While sales growth has slowed recently, the naysayers could not possibly have been more wrong. Here are some images from the iPad's debut day.

Sign of things to come
A fan outside the Apple Store in New York City.

Lining up
In what has now become a ritualistic sight for Apple product launches, customers line up for the first iPad outside of a store in San Francisco.

Lining up the goods
A store employee prepares the product for sale in San Francisco.

Initial reaction
Andreas Schobel reacts after being among the first to purchase an iPad at the San Francisco store.

A Halloween costume to come
Lyle Haney walks along the waiting line wearing what would become a common Halloween costume.

300,000 sold that day
A worker rings up a sale in the New York store. Apple reported that it sold 300,000 iPads on that first day.

Mr. iFixIt among buyers
Luke Soules, co-founder of iFixit, was among the first to walk out of the Richmond, Va., store with a pre-ordered iPad. Here is the tear-down iFixit did on the machine.

Employees cheer
Store workers cheer as hundreds of shoppers enter the Chicago outlet.

'Hey Steve, here’s your iPad, buddy’
Steve would appear to be Steve Mays. The UPS guy who brought him his iPad is not identified, but you can hear him announce the delivery here.

You could Google it
And this is what the Google search results page looked like on Day One of the iPad.

It was front-page news
Including in the Honolulu Advertiser, for example, which warned readers to “expect a crowd.”

At the bar
By evening, many an iPad owner was enjoying a new way to end the day with a newspaper and a nightcap.




Wednesday, April 1, 2015

CRISC Certified in Risk and Information Systems Control

QUESTION 1
Which of the following is the MOST important reason to maintain key risk indicators (KRIs)?

A. In order to avoid risk
B. Complex metrics require fine-tuning
C. Risk reports need to be timely
D. Threats and vulnerabilities change over time

Answer: D

Explanation:
Threats and vulnerabilities change over time, and KRI maintenance ensures that KRIs continue to capture these changes effectively.
The risk environment is highly dynamic because the enterprise's internal and external environments are constantly changing. Therefore, the set of KRIs needs to be revised over time so that it continues to capture changes in threats and vulnerabilities.

Answer: B is incorrect. While most key risk indicator (KRI) metrics need to be tuned for sensitivity, the most important objective of KRI maintenance is to ensure that KRIs continue to capture changes in threats and vulnerabilities over time, which is why option D is the better answer.

Answer: C is incorrect. Risk reporting timeliness is a business requirement, but it is not a reason for KRI maintenance.

Answer: A is incorrect. Risk avoidance is one possible risk response. Risk responses may be informed by KRI reporting, but risk avoidance is not the reason for maintaining KRIs.


QUESTION 2
You are the project manager of a HGT project that has recently finished the final compilation
process. The project customer has signed off on the project completion and you have to do few
administrative closure activities. In the project, there were several large risks that could have
wrecked the project but you and your project team found some new methods to resolve the risks
without affecting the project costs or project completion date. What should you do with the risk
responses that you have identified during the project's monitoring and controlling process?

A. Include the responses in the project management plan.
B. Include the risk responses in the risk management plan.
C. Include the risk responses in the organization's lessons learned database.
D. Nothing. The risk responses are included in the project's risk register already.

Answer: C

Explanation:
Risk responses that were newly identified during the project should be included in the organization's lessons learned database so that other project managers can apply them to their own projects where relevant.

Answer: D is incorrect. If the newly identified responses are recorded only in the project's risk register, they may not be shared with project managers working on other projects.

Answer: A is incorrect. During the project the responses belong in the risk response plan, not the project management plan, and after closure they should be entered into the organization's lessons learned database.

Answer: B is incorrect. The risk responses are included in the risk response plan, but after
completing the project, they should be entered into the organization's lessons learned database.


QUESTION 3
You are the project manager of GHT project. You have identified a risk event on your project that
could save $100,000 in project costs if it occurs. Which of the following statements BEST
describes this risk event?

A. This risk event should be mitigated to take advantage of the savings.
B. This is a risk event that should be accepted because the rewards outweigh the threat to the
project.
C. This risk event should be avoided to take full advantage of the potential savings.
D. This risk event is an opportunity to the project and should be exploited.

Answer: D

Explanation:
This risk event has the potential to save money on project costs, so it is an opportunity, and the appropriate strategy to use in this case is the exploit strategy. The exploit response is the strategy used for risks with positive impacts, where the organization wishes to ensure that the opportunity is realized. Assigning more talented resources to the project to reduce the time to completion is one example of an exploit response.

Answer: B is incorrect. To accept a risk means that no action is taken relative to it; any loss is accepted if it occurs. Because this risk event brings an opportunity, it should be exploited rather than merely accepted.

Answer: A and C are incorrect. Mitigation and avoidance responses are used for negative risk events, not positive ones. Because the event could save $100,000, it is a positive risk event and therefore should not be mitigated or avoided.


QUESTION 4
You are the project manager of a large construction project. This project will last for 18 months
and will cost $750,000 to complete. You are working with your project team, experts, and
stakeholders to identify risks within the project before the project work begins. Management wants
to know why you have scheduled so many risk identification meetings throughout the project
rather than just initially during the project planning. What is the best reason for the duplicate risk
identification sessions?

A. The iterative meetings allow all stakeholders to participate in the risk identification processes
throughout the project phases.
B. The iterative meetings allow the project manager to discuss the risk events which have passed
the project and which did not happen.
C. The iterative meetings allow the project manager and the risk identification participants to
identify newly discovered risk events throughout the project.
D. The iterative meetings allow the project manager to communicate pending risks events during
project execution.

Answer: C

Explanation:
Risk identification is an iterative process because new risks may evolve or become known as the
project progresses through its life cycle.

Answer: D is incorrect. The primary reason for iterations of risk identification is to identify new risk
events.

Answer: B is incorrect. Risk identification focuses on discovering new risk events, not the events
which did not happen.

Answer: A is incorrect. Stakeholders are encouraged to participate in the risk identification process, but that participation is not the best reason for holding the repeated sessions.


QUESTION 5
You are the risk official at Bluewell Inc. and you are required to prioritize several risks. A risk has ratings for occurrence, severity, and detection of 4, 5, and 6, respectively. What Risk Priority Number (RPN) would you give it?

A. 120
B. 100
C. 15
D. 30

Answer: A

Explanation:
The steps involved in calculating a risk priority number are as follows:
Identify potential failure effects
Identify potential causes
Establish links between each identified potential cause
Identify potential failure modes
Assess severity, occurrence, and detection
Score each assessment on a scale of 1 to 10 (low to high)
Compute the RPN for a particular failure mode as severity multiplied by occurrence and detection:
RPN = Severity * Occurrence * Detection
Hence, RPN = 4 * 5 * 6 = 120

Answer: B, C, and D are incorrect. These values are not the RPN for the given severity, occurrence, and detection ratings.
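
For readers who want to check the arithmetic themselves, here is a minimal Python sketch of the RPN formula above; the function name and the 1-10 range check are illustrative choices, not part of the CRISC material.

def risk_priority_number(severity, occurrence, detection):
    # Each rating is expected on a 1-10 scale (low to high), per the steps above.
    for name, value in (("severity", severity), ("occurrence", occurrence), ("detection", detection)):
        if not 1 <= value <= 10:
            raise ValueError(f"{name} rating must be between 1 and 10, got {value}")
    # RPN = Severity * Occurrence * Detection
    return severity * occurrence * detection

# Ratings from the question: occurrence 4, severity 5, detection 6
print(risk_priority_number(severity=5, occurrence=4, detection=6))  # prints 120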



Tuesday, March 24, 2015

642-736 Implementing Advanced Cisco Unified Wireless Security (IAUWS)


QUESTION 1
What is the purpose of looking for anomalous behavior on a WLAN infrastructure?

A. Identifying new attack tools
B. Auditing employee's bandwidth usage
C. Identifying attacks using signature matching
D. Improving performance by load balancing

Answer: A


QUESTION 2
As of controller release v5.2, which two statements about wired guest access support are true?
(Choose two.)

A. It is not supported on the Cisco 2100 Series Controllers.
B. No more than three wired guest access LANs can be configured on a controller.
C. Layer 3 web authentication and passthrough are not supported.
D. Wired guest access cannot be configured in a dual-controller configuration that uses an anchor
controller and a foreign controller.
E. The wired guest access ports must be in the same Layer 2 network as the foreign controller.

Answer: A,E


QUESTION 3
The wireless client can roam faster on the Cisco Unified Wireless Network infrastructure when
which condition is met?

A. EAP-FAST is used for client authentication on the wireless network.
B. Cisco Centralized Key Management is used for Fast Secure Roaming.
C. QoS is being used on the WLAN to control which client packets get through the network faster.
D. RRM protocol is used between multiple APs that the client associates to while roaming.

Answer: B


QUESTION 4
Which option best describes an evil twin attack?

A. A rogue access point broadcasting a trusted SSID
B. A rogue access point broadcasting any SSID
C. A rogue ad-hoc network with the SSID "Free WiFi"
D. A rogue access point spreading malware upon client connection

Answer: A


QUESTION 5
Which two configuration parameters does NAC OOB require on a SSID/WLAN? (Choose two.)

A. WMM enabled on the WLAN
B. Open authentication on the WLAN
C. AAA override configuration on the WLAN
D. 802.1x configuration on the WLAN

Answer: B,D



Thursday, March 19, 2015

3V00290A APDS Avaya Scopia Online Test


QUESTION 1
You are proposing videoconferencing for a customer with 15 large meeting rooms, 25 small
meeting rooms, and 4000 employees dispersed over three continents: North America, Asia, and
Europe. Thirty percent of the workforce will be video-enabled, and you are proposing XT5000s for
the large meeting rooms and XT4200 for the small meeting rooms. Using the normal 1:10 ratio for
simultaneous rooms and users, how many ports (including cascading) and Elite 5000 MCUs
should be included in the design?

A. 440 352p ports or 4 Elite 5230 MCUs
B. 280 352p ports or 2 Elite 5230 MCUs
C. 152 352p ports or 3 Elite 5115 MCUs
D. 140 352p ports or 4 Elite 5110 MCUs
E. 136 352p ports or 3 Elite 5110 MCUs

Answer: C

Explanation:


QUESTION 2
Your customer, Jay, is reviewing your proposal for Scopia® video conferencing. He notices that
within Scopia Management, there is a SIP Back-to-Back User Agent and an internal gatekeeper
that could be external. When would you tell him he would use an external gatekeeper instead of
an internal gatekeeper?

A. In order to work with an external Microsoft SQL database
B. When running Scopia Management (iView) on a Linux server
C. To support configurations with multiple cascaded Elite MCUs
D. To support Scopia Management (iView) redundancy

Answer: D

Explanation:


QUESTION 3
Your customer is concerned about the ease of use for the infrequent video collaboration user. You
explain that your solution includes Scopia® Control. What is Scopia Control?

A. An iPad app for conference control.
B. An Android mobile device app for conference control.
C. An Android mobile device app for configuring the user's virtual room.
D. An iPad app for configuring the user's virtual room.

Answer: A

Explanation:
Scopia Control is an Apple iPad application for control of Avaya video conferencing systems. The
highly intuitive user interface virtually eliminates the learning curve for a video conferencing
system. The integrated conference room calendar and enterprise directory makes it easy to join
meetings and invite others. Room system control and meeting moderation are simple through the
iPad Multi-Touch user interface.
Reference: http://www.avaya.com/usa/product/avaya-scopia-xt-video-conferencing/


QUESTION 4
You are meeting with your Account Team and discussing a small SMB customer. You're hesitant
to select the Scopia® SMB solution with the MCU embedded in the XT1200, because it has some
differences from a configuration with an Elite MCU and Scopia Management (iView). Select three
capabilities the SMB solution does not support that you would discuss with the Account Team.
(Choose 3)

A. Support for Scopia Mobile users
B. Support for internal Scopia Desktop Client users
C. Support recording and streaming of conferences
D. Support for encryption of conferences over 4
E. Support for external Scopia Desktop Client users
F. Support multiple concurrent conferences

Answer: D,E,F
Reference: http://docs.radvision.com/bundle/rv_solution_guide_8/soln_sg_deployment_smb_limits


QUESTION 5
For users who operate out of the office, Scopia® offers desktop client and mobile applications.
Your friend Oliver, another SE, calls to ask you about a statement in the Scopia marketing
materials that says that Scopia is the best meet-me client because it is more than an endpoint.
Although there are many reasons, what two would you want to tell Oliver about? (Choose 2)

A. Error resiliency for both the desktop and mobile clients uses SVC (scalable video coding) and
Netsense
B. Users can download the presentation using the slider feature
C. User features such as chat, FECC (far end camera control), and raise hand
D. Best user experience with calendar integration and one tap to join
E. Simple and secure firewall traversal using HTTPS (hypertext transfer protocol secure)

Answer: D,E

Explanation:


Monday, March 2, 2015

Sensor tech makes predicting the future easier to do

Internet of Things industrial applications designed to forecast failure gain adoption

LAS VEGAS - We no longer need seers, oracles and psychics to tell us about the future. The declining cost of sensor technology and the availability of cloud-based analytical platforms are making predictive analytics accessible to every industry and most products.

These technologies give insight into how products are performing. Sensors record vibration, temperature, pressure and voltage, among other conditions, and provide data for real-time analysis. That analysis can reveal faulty parts in products weeks before they actually fail.
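
As a rough illustration of what that real-time screening can look like, here is a generic Python sketch, not any vendor's actual analytics; the window size, threshold, and sample values are arbitrary placeholders. It flags readings that drift well outside a rolling baseline.

from collections import deque
from statistics import mean, stdev

def drift_alerts(readings, window=50, threshold=3.0):
    # Flag readings more than `threshold` standard deviations away from the
    # mean of the previous `window` readings -- a crude stand-in for the
    # kind of real-time analysis described above.
    history = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                alerts.append((i, value))
        history.append(value)
    return alerts

# Example: steady vibration samples with one outlier at the end
samples = [0.51, 0.49, 0.50, 0.52] * 20 + [0.95]
print(drift_alerts(samples, window=20))  # [(80, 0.95)]

Real analytics platforms are far more sophisticated than this, but the principle of comparing live readings against a learned baseline is the same.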

Initial deployments of sensors have been in large and expensive industrial platforms, such as electrical generation systems and jet engines. In time, sensors connected to analytical platforms will be found in nearly every product.

The belief is that this technology will make machinery and systems more reliable. Sensors and analytics will alert users and vendors to problems days, weeks and months before a problem becomes visible. This insight into performance will also significantly reduce unplanned failures.

"We will know more about when they are going to fail, and how they fail," said Richard Soley, CEO of the Object Management Group, a nonprofit technology standards consortium.

Businesses will also benefit from learning how customers are using their products, which will shape how products are made, Soley said.

Predictive analytics capability in industrial applications is not a new concept. Big machinery has long used sensors. What is new is the convergence of three major trends that will make deployment ubiquitous, say people working in this area.

First, sensor technology is declining in price as it gets smaller and more efficient; second, wireless communication systems have become reliable and global; third, cloud-based platforms for analytics and development are emerging rapidly. Collectively, these trends underpin the Internet of Things.

At IBM's big conference, InterConnect, this week, the University of South Carolina was showing off a sensor-equipped gear box on an Apache helicopter that is part of a study for the U.S. Army. Four sensors on the gear box collected temperature and vibration data.

One of the big savings from this technology, aside from predicting failure, is planning maintenance correctly. Many maintenance activities may be unnecessary or wasteful, and some can introduce new problems.

"If you can reduce improper maintenance processes and improve the identification of faulty maintenance, you can directly impact safety," said Retired Maj. Gen. Lester Eisner, with South Carolina's National Guard, who is deputy director of the university's Office of Economic Engagement.

In another area, National Instruments has been working with utilities to deploy its sensor technology. Today, many utilities have employees who collect data directly off machines, which is something of a shotgun approach, said Stuart Gillen, principal marketing manager at the company and a speaker at the IBM conference.

All it takes is one or two "catches" – preventing a failure in a large system – to justify the cost of deploying technology that can take in all the data from these systems and provide a more targeted approach to maintaining them, Gillen said.

National Instruments is working with IBM and its recently launched Internet of Things capability, which is part of IBM's Bluemix cloud platform. This platform gives developers the ability to create new ways of working with the machine data.

There is much optimism that this technology will reduce equipment failures. The goal is to see a little further into the future and reduce the need to rely on hard-earned hindsight. But no one is predicting that this technology will eliminate failure altogether.

"There are a lot of variables" that can contribute to equipment failure, said Sky Matthews, the CTO of IBM's Internet of Things effort, but this technology "can certainly dramatically reduce them."


Monday, February 23, 2015

How Etsy makes Devops work

Etsy, which describes itself as an online “marketplace where people around the world connect to buy and sell unique goods,” is often trotted out as a poster child for Devops. The company latched onto the concepts early and today is reaping the benefits as it scales to keep pace with rapid business growth. Network World Editor in Chief John Dix caught up with Etsy VP of Technical Operations Michael Rembetsy to ask how the company put the ideas to work and what lessons it learned along the way.

Let’s start with a brief update on where the company stands today.

The company was founded and launched in 2005 and, by the time I joined in 2008 (the same year as Chad Dickerson, who is now CEO), there were about 35 employees. Now we have well over 600 employees and some 42 million members in over 200 countries around the world, including over 1 million active sellers. We don’t have sales numbers for this year yet, but in 2013 we had about $1.3 billion in Gross Merchandise Sales.

How, where and when did the company become interested in Devops?
When I joined, things were growing in a very organic way, and that resulted in a lot of silos and barriers within the company and distrust between different teams. The engineering department, for example, put a lot of effort into building a middle layer – what I called the layer of distrust – to allow developers to talk to our databases in a faster, more scalable way. But it turned out to be just the opposite. It created a lot more barriers between database engineers and developers.

Everybody really bonded well together on a personal level. People were staying late, working long hours, socializing after hours, all the things people do in a startup to try to be successful. We had a really awesome office vibe, a very edgy feel, and we had a lot of fun, even though we had some underlying engineering issues that made it hard to get things out the door. Deploys were often very painful. We had a traditional mindset of, developers write the code and ops deploys it. And that doesn’t really scale.

How often were you deploying in those early days?
Twice a week, and each deploy took well over four hours.
"Deploys were often very painful. We had a traditional mindset of, developers write the code and ops deploys it. And that doesn’t really scale."

Twice a week was pretty frequent even back then, no?
Compared to the rest of the industry, sure. We always knew we wanted to move faster than everyone else. But in 2008 we compared ourselves to a company like Flickr, which was doing 10 deploys a day, which was unheard of. So we were certainly going a little bit faster than many companies, but the problem was we weren’t going fast with confidence. We were going fast with lots of pain and it was making the overall experience for everyone not enjoyable. You don’t want to continuously deploy pain to everyone. We knew there had to be a better way of doing it.

Where did the idea to change come from? Was it a universal realization that something had to give?
The idea that things were not working correctly came from Chad. He had seen quite a lot in his time at Yahoo, and knew we could do it better and we could do it faster. But first we needed to stabilize the foundation. We needed to have a solid network, needed to make sure that the site would be up, to build confidence with our members as well as ourselves, to make sure we were stable enough to grow. That took us a year and a half.

But we eventually started to figure out little things like, we shouldn’t have to do a full site deploy every single time we wanted to change the banner on the homepage. We don’t have any more banners on the homepage, but back in 2009 we did. The banner would rotate once a week and we would have to deploy the entire site in order to change it, and that took four hours. It was painful for everyone involved. We realized if we had a tool that would allow someone in member ops or engineering to go in and change that at the flick of a button we could make the process better for everyone.
"I can’t recall a time where someone walked in and said, “Oh my God, that person deployed this and broke the site.” That never happened. People checked their egos at the door."

So that gave birth to a dev tools team that started building some tooling that would let people other than operational folks deploy code to change a banner. That was probably one of the first Devops-like realizations. We were like, “Hey, we can build a better tool to do some of what we’re doing in a full deploy.” That really sparked a lot of thinking within the teams.
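
The kind of tool being described is essentially a content toggle: the banner text lives in a small datastore that the site reads at request time, so changing it never requires a code deploy. Here is a minimal sketch of the idea; the file-based storage, function names, and example message are illustrative placeholders, not Etsy's actual implementation.

import json

BANNER_FILE = "banner.json"   # placeholder datastore

def set_banner(message):
    # Called from an admin tool by member ops or engineering.
    with open(BANNER_FILE, "w") as f:
        json.dump({"message": message}, f)

def current_banner():
    # Called by the site when rendering the homepage -- no deploy needed.
    with open(BANNER_FILE) as f:
        return json.load(f)["message"]

set_banner("Example banner text for this week")
print(current_banner())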

Then we realized we had to get rid of this app in the middle because it was slowing us down, and so we started working on that. But we also knew we could find a better way to deploy than making a TAR file, SSH’ing and rsync’ing it out to a bunch of servers, and then running another command that pulls each server out of the load balancer, unpacks the code and then puts the server back in the load balancer. This used to happen while we sat there hoping everything was OK as we deployed across something like 15 servers. We knew we could do it faster and we knew we could do it better.
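
To make the shape of that old process concrete, here is a rough Python sketch; the host names, paths, tarball name, and the lb-disable/lb-enable commands are placeholders invented for illustration, not Etsy's actual tooling.

import subprocess

SERVERS = ["web01", "web02", "web03"]   # placeholder host names
TARBALL = "site-release.tar.gz"         # placeholder build artifact

def deploy(server):
    # Copy the tarball to the host, pull the host out of the load balancer,
    # unpack the code, and put the host back in -- one server at a time.
    subprocess.run(["rsync", "-az", TARBALL, f"{server}:/tmp/"], check=True)
    subprocess.run(["ssh", server,
                    "lb-disable && tar -xzf /tmp/site-release.tar.gz -C /var/www && lb-enable"],
                   check=True)

for host in SERVERS:
    deploy(host)  # hope everything stays OK while the loop runs

Even scripted, looping over a dozen or more hosts this way is slow and nerve-wracking, which is the pain that in-house tooling like the Deployinator mentioned later in this interview was built to remove.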

The idea of letting developers deploy code onto the site really came about toward the end of 2009, beginning of 2010. And as we started adding more engineers, we started to understand that if developers felt the responsibility for deploying code to the site they would also, by nature, take responsibility for if the site was up or down, take into consideration performance, and gain an understanding of the stress and fear of a deploy.

It’s a little intimidating when you’re pushing that big red button that says – Put code onto website – because you could impact hundreds of thousands of people’s livelihoods. That’s a big responsibility. But whether the site breaks is not really the issue. The site is going to break now and then. We’re going to fix it. It’s about making sure the developers and others deploying code feel empowered and confident in what they’re doing and understand what they’re doing while they’re doing it.

So there wasn’t a Devops epiphany where you suddenly realized the answer to your problems. It emerged organically?
It was certainly organic. If development came up with better ideas of how to deploy faster, operations would be like, “OK, but let’s also add more visibility over here, more graphs.” And there was no animosity between each other. It was just making things faster and better and stronger in a lot of ways.

And as we did that, the culture in the whole organization began to feel better. There was no distrust between people. You’re really talking about building trust and building friendships in a lot of ways, relationships between different groups, where it’s like, “Oh, yeah. I know this group. They can totally do this. That’s fine. I’ll back them up, no problem.” In a lot of organizations I’ve worked for in the past it was like, “These people? Absolutely not. They can’t do that. That’s absurd.”
"I didn’t marry my wife the first day I met her. It took me a long time to get to the point where I felt comfortable in a relationship to go beyond just dating. It takes longer than people think and they need to be aware of that because, if it doesn’t work after a quarter or it doesn’t work after two quarters, people can’t just abandon it."

And you have to remember this is in the early days where the site breaks often. So it was one of those things, like, OK, if it breaks, we fix it, but we want reliability and sustainability and uptime. So in a lot of ways it was a big leap of faith to try to create trust between each other and faith that other groups are not going to impact the rest of the people.

A lot of that came from the leadership of the organization as well as the teams themselves believing we could do this. Again, we weren’t an IBM. We were a small shop. We all sat very close to one another. We all knew when people were coming and leaving so it made it relatively easy to have that kind of faith in one another. I can’t recall a time where someone walked in and said, “Oh my God, that person deployed this and broke the site.” That never happened. People checked their egos at the door.

I was going to ask you about the physical proximity of folks. So the various teams were already sitting cheek by jowl?
In the early days we had people on the left coast and on the right coast, people in Minnesota and New York. But in 2009 we started to realize we needed to bring things back in-house to stabilize things, to make things a little more cohesive while we were creating those bonds of trust and faith. So if we had a new hire we would hire them in-house. It was more of a short-term strategy. Today we are more of a remote culture than we were in 2009.

But you didn’t actually integrate the development and operations teams?
In the early days it was very separate but there was no idea of separation. Depending upon what we were working on, we would inject ourselves into those teams, which led later to this idea of what we call designated operations. So when John Allspaw, SVP of Operations and Infrastructure, came on in 2010, we were talking about better ways to collaborate and communicate with other teams and John says, “We should do this thing called designated operations.”

The idea of designated ops is it’s not dedicated. For example, if we have a search team, we don’t have a dedicated operations person who only works on search. We have a designated person who will show up for their meetings, will be involved in the development of a new feature that’s launching. They will be injecting themselves into everything the engineering team will do as early as possible in order to bring the mindset of, “Hey, what happens if that fails to this third-party provider? Oh, yeah. Well, that’s going to throw an exception. Oh, OK. Are we capturing it? Are we displaying a friendly error for an end user to see? Etc.”

And what we started doing with this idea of designated ops is educate a lot of developers on how operations works, how you build Ganglia graphs or Nagios alerts, and by doing that we actually started creating more allies for how we do things. A good example: the search team now handles all the on-call for the search infrastructure, and if they are unavailable it escalates to ops and then we take care of it.

So we started seeing some real benefits by using the idea of this designated ops person to do cross-team collaboration and communication on a more frequent basis, and that in turn gave us the ability to have more open conversations with people. So that way you remove a lot of the mentality of, “Oh, I’m going to need some servers. Let me throw this over the wall to ops.”

Instead, what you have is the designated ops person coming back to the rest of the ops team saying, “We’re working on this really cool project. It’s going to launch in about three months. With the capacity planning we’ve done it is going to require X, Y and Z, so I’m going to order some more servers and we’ll have to get those installed and get everything up and running. I want to make everybody aware I’m also going to probably need some network help, etc.”

So what we started finding was the development teams actually had an advocate through the designated ops person coming back to the rest of the ops team saying, “I’ve got this.” And when you have all of your ops folks integrating themselves into these other teams, you start finding some really cool stuff, like people actually aren’t mad at developers. They understand what they’re trying to do and they’re extremely supportive. It was extremely useful for collaboration and communication.

So Devops for you is more just a method of work.

Correct. There is no Devops group at Etsy.

How many people involved at this point?

Product engineering is north of 200 people. That includes tech ops, development, product folks, and so on.

How do you measure success? Is it the frequency of deployments or some other metric?
Success is a really broad term. I consider failure success, as well. If we’re testing a new type of server and it bombs, I consider that a success because we learned something. We really changed over to more of a learning culture. There are many, many success metrics and some of those successes are actually failures. So we don’t have five key graphs we watch at all times. We have millions of graphs we watch.

Do you pay attention to how often you deploy?
We do. I could tell you we’re deploying over 60 times a day now, but we don’t say, “Next year we want to deploy 100 times a second.” We want to be able to scale the number of deploys we’re doing with how quickly the rest of the teams are moving. So if a designated ops or development team starts feeling some pain, we’ll look at how we can improve the process. We want to make sure we’re getting the features out we want to get out and if that means we have to deploy faster, then we’re going to solve that problem. So it’s not around the number of deploys.

I presume you had to standardize on your tool sets as you scaled.
We basically chose a LAMP stack: Linux, Apache, MySQL and PHP. A lot of people were like, “Oh, I want to use CoffeeScript or I want to use Tokyo Cabinet or I want to use this or that,” and it’s not about restricting access to languages, it’s about creating a common denominator so everyone can share experiences and collaborate.

And we wrote Deployinator, which is our in-house tool that we use to deploy code, and we open-sourced it because one of our principles is we want to share with the community. Rackspace at one point took Deployinator and rewrote a bunch of stuff and they were using it as their own deploying tool. I don’t know if they still are today, but that was back in the early days when it first launched.

We use Chef for configuration management, which is spread throughout our infrastructure; we use it all over the place. And we have a bunch of homegrown tools that help us with a variety of things. We use a lot of Nagios and Graphite and Ganglia for monitoring. Those are open-source tools that we contribute back to. I’d say that’s the vast majority of the tooling that ops uses at this point. Development obviously uses standard languages and we built a lot of tooling around that.

As other people are considering adopting these methods of work, what kind of questions should they ask themselves to see if it’s really for them?
I would suggest they ask themselves why they are doing it. How do they think they’re going to benefit? If they’re doing it to, say, attract talent, that’s a pretty terrible reason. If they’re doing it to improve the overall structure of the engineering culture, to help people feel more motivated and take more ownership, or because they think they can improve the community or the product they’re responsible for, those are really good reasons to do it.

But they have to keep in mind it’s not going to be an overnight process. It’s going to take lots of time. On paper it looks really, really easy. We’ll just drop some Devops in there. No problem. Everybody will talk and it will be great.

Well no. I didn’t marry my wife the first day I met her. It took me a long time to get to the point where I felt comfortable in a relationship to go beyond just dating. It takes longer than people think and they need to be aware of that because, if it doesn’t work after a quarter or it doesn’t work after two quarters, people can’t just abandon it. It takes a lot of time. It takes effort from people at the top and it takes effort from people on the bottom as well. It’s not just the CEO saying, “Next year we’re going to be Devops.” That doesn’t work. It has to be a cultural change in the way people are interacting. That doesn’t mean everybody has to get along every step of the way. People certainly will have discussions and disagreements about how they should do this or that, and that’s OK.
