The first time right approach is a quality management concept that emphasizes doing things correctly the first time to prevent errors, waste, and rework. Here are some key steps for implementing the first time right approach in operations:
Develop Standard Operating Procedures (SOPs): SOPs provide clear and detailed instructions for how to carry out tasks correctly. Developing SOPs is an essential step in ensuring that everyone involved in operations understands the right way to do things.
Train and educate employees: It is crucial to train and educate employees on the correct procedures and ensure they understand the importance of following them. Providing regular training and refresher courses can help reinforce the importance of the first time right approach.
Use checklists: Checklists help to ensure that all critical steps are followed and that nothing is overlooked or missed. Checklists can also provide a record of work completed, making it easier to identify issues and opportunities for improvement.
Implement quality control processes: Quality control processes can help to identify and address issues before they become bigger problems. Regular inspections, quality control checks, and testing can help ensure that operations are running correctly.
Continuously monitor and evaluate: Monitoring and evaluating operations can help identify areas for improvement and ensure that the first time right approach is being followed. It is essential to measure and track key performance indicators (KPIs) to identify issues and opportunities for improvement.
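As one concrete example of such a KPI, a first-time-right rate measures the share of units completed correctly without rework. Here is a minimal Python sketch, with hypothetical field names and data:

```python
# Minimal sketch: computing a first-time-right (FTR) rate as a KPI.
# The record structure and data below are hypothetical examples.

def first_time_right_rate(records):
    """Return the fraction of units completed correctly on the first pass."""
    if not records:
        return 0.0
    passed_first_time = sum(1 for r in records if not r["rework_needed"])
    return passed_first_time / len(records)

production_log = [
    {"unit_id": 1, "rework_needed": False},
    {"unit_id": 2, "rework_needed": True},   # failed inspection, reworked
    {"unit_id": 3, "rework_needed": False},
    {"unit_id": 4, "rework_needed": False},
]

print(f"FTR rate: {first_time_right_rate(production_log):.0%}")  # FTR rate: 75%
```

Tracking this number over time makes it easy to see whether the approach is actually being followed.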
IT infrastructure projects are complex and time-sensitive, often involving significant cost. From work environment relocation to data center builds, projects can include precision construction, systems design and integration, anticipation of future standards evolution and facility expansion. An outsourced project manager (PM) with recent success on similar projects has the experience needed. Engaging a skilled expert in IT infrastructure project management increases the chance of successful completion.
Companies have found their business processes benefit from PM management and facilitation. These benefits include resource coordination, communication with team members, budget monitoring and reporting updates to management. The PM provides the integrated vision which leads the project to a successful completion. PMI certification indicates the PM understands the tools and processes of professional project management. For IT, the PMI-certified PM adds an additional element: knowledge of IT-related subject matter.
Critical IT needs expert design and construction. A PM with experience in Tier 3 and 4 data center designs, copper and fiber network installation and large-scale power and cooling can guide a large project to a successful conclusion and ask the right questions along the way. IT PMs are also prepared to work with leading-edge technologies from renewable power sources to KyotoCooling heat reduction.
Group downtime has to be minimal during an office relocation. A PM can coordinate the physical move, electrical and power installation, and reconnection at the destination. While the move preparation is coordinated over weeks or months, the timeline for completion can be as short as a single night or weekend. Precision planning and organization make it possible, while the PM's understanding of the equipment, connectors, wiring, cabling and network equipment required is essential to ensure everything is ready for the move.
Vendor lead times, product substitutions, shipping, delivery and uncrating of delicate, expensive equipment need to be managed carefully. Orchestrating the positioning and connection of rows of equipment means knowing how the puzzle pieces fit together, and the PM makes sure everything happens in the proper order.
A PM with IT consulting experience asks the right questions upfront. Understanding key differences in cable parameters, connector types, printer and network closet configuration and wireless endpoint types help him or her double-check the design. Adding steps such as the use of TDR cable testing can raise confidence levels and ensure data can flow to each workstation even before routers and switches are powered up. The PM’s experience and technical expertise help a weekend move lead to a trouble-free Monday morning.
Between design and implementation, available products and supplies often change. A PM versed in IT terms and equipment can work with vendors and designers to replace obsolete or unavailable equipment. As needed, the PM can source cable from new suppliers that is compliant with the necessary electrical standards and compatible with the connectors in stock, ensuring new networking hardware has the support it needs for incoming optical connections.
IT PMs work on a peer basis with vendors, speaking the same technical language and meeting design goals with cost-effective equipment. With control over delivery timing and quantities per order, PMs can help improve the purchasing process for the best vendor pricing.
An experienced IT project outsourcing manager brings expertise from large numbers of projects, big and small. The customer benefits from the PM’s industry-specific knowledge of processes and equipment, and the resulting cost management and on-time successful completion.
Contact us today to learn more about our IT Infrastructure Project Management outsourcing, Office Relocation, Data Center Builds/Relocations, Business Continuity Planning, and Business Analysis for IT.
Lack of Resources: One of the primary reasons for policy implementation failure is the lack of resources allocated towards it. Resources could include financial, human, or technical resources. Without adequate resources, policy implementation can become difficult, leading to failure.
Resistance to Change: Policies often require changes in behavior, procedures, and practices, and people are generally resistant to change. Resistance from stakeholders, including employees, organizations, and the public, can hinder successful policy implementation.
Inadequate Planning: Insufficient planning, including a lack of clear goals, objectives, and strategies, can lead to implementation failure. If policymakers fail to anticipate potential obstacles or lack effective implementation plans, it can become challenging to carry out the policy.
Lack of Collaboration and Communication: Implementation failure can also occur when there is a lack of collaboration and communication among stakeholders. Without clear and effective communication and collaboration among policymakers, implementers, and stakeholders, policy implementation can become fragmented, leading to a lack of coherence and effectiveness.
Contact us today to learn more about our Implementation Engineering Services.
There are a few guidelines that can help to deliver a secure and modular network environment:
Strong authentication to allow controlled access to information assets. Two-factor authentication acts as an extra layer of security for logins, ensuring that attempted intrusions are halted before any damage is done (a minimal sketch of one common mechanism follows this list).
Hardening of mobile and IoT devices that connect to the network. Access control policies define high-level requirements that determine who may access information, and under what circumstances that information can be accessed.
Embedded security services inside devices and applications. Embedded security solutions can help protect devices ranging from ATMs to automated manufacturing systems. Features including application whitelisting, antivirus protection, and encryption can be embedded to help protect otherwise exposed IoT devices.
Collecting security intelligence directly from applications and their hosts. Maintaining an open communication line with cloud service providers like AWS can greatly increase security protections. Application and service managers understand how to integrate shared security with their systems better than anyone else.
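To make the two-factor authentication point concrete, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238) using only the Python standard library; the secret below is a placeholder that a real system would generate per user:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    """Constant-time comparison of the submitted code against the current one."""
    return hmac.compare_digest(totp(secret_b32), submitted)

SECRET = "JBSWY3DPEHPK3PXP"  # placeholder; share with the user's authenticator app
print(totp(SECRET), verify(SECRET, totp(SECRET)))
```

In practice the second factor runs alongside the password check, so a stolen password alone is not enough to log in.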
UX consultants help clients gain a clearer understanding of who their customers are and what they want.
Working solo, it can be hard (or nearly impossible) to oversee a large-scale user research project, so UX consultants have to figure out what companies already know about their customers—which often means sitting down with company leaders to draw out key insights.
During this phase, consultants may also conduct user interviews, draft and collect surveys, and review any existing quantitative data that’s relevant to how customers use the company’s product.
UX consultants perform audits of apps, SaaS products, and websites.
No two projects are the same, but over the years, UX consultants witness patterns emerge. Things that don’t work, ever, become abundantly clear. Design principles and strategies that prove trustworthy are held close.
When hired to audit an app, SaaS product, or website, consultants rely on past experience and an acute knowledge of interface design to create reports that highlight glaringly bad design features.
Audits can uncover all kinds of problems, but they tend to emphasize issues that can be improved quickly. “If we replace this pixelated image and massive wall of text and put a clear call-to-action with a button here, we’ll immediately convert more customers.”
UX consultants build prototypes and perform usability testing.
When an audit uncovers deeper UX issues (like poor information architecture), quick fixes won’t work, and consultants have to administer comprehensive care. A prototype must be designed, tested, and iterated upon.
Depending on the project, there are varying degrees of detail that consultants can pursue, but in most cases, functional wireframes and a handful of target users (5-7) will provide a clear picture of a digital product’s usability.
If a UX consultant uncovers serious design issues during an audit, it may be necessary to completely rethink the way a digital product works.
UX consultants roadmap user experience strategies.
Our interactions with digital are always evolving. Hardware, software, platforms, and societal expectations are dynamic. If businesses only plan for today’s technology, they’ll quickly be left behind. It’s better to operate from a strategic plan that is extremely focused on users.
What do they want?
What do they need?
Where are they spending their time?
How can we continue to provide a world-class user experience even as their behavior and technology change?
These are huge questions. Because of their holistic, big-picture understanding of design, UX consultants are well-equipped to provide a framework of answers that companies can use to maintain digital relevancy.
UX consultants provide ongoing direction and means to measure the effectiveness of UX.
Consultants don’t simply write one-and-done reports and leave clients hanging when issues arise. Strategies aren’t always executed as planned, and at times, recommendations may need to be revised.
Many consultants are rehired or placed on retainer to continue advising from a big-picture perspective, ensuring that the overall health of a company’s UX stays strong.
UX consultants engage and educate key staff members.
Consultants don’t have the relational and experiential history that staff do. In many ways, they’re outsiders.
One of the most important things that a UX consultant can do is include staff in their process and give them reasons to be excited about the (inevitable) changes that must be made. There’s also an element of education that needs to happen. It’s not just, “here are the changes,” but “here’s why we’re changing and how it will improve the UX.”
If these things don’t happen, consultants run the risk of fostering an adversarial mindset in staff.
There’s definitely an overlap between the daily responsibilities of UX designers and consultants, but there are also important differences. UX consultants operate under a distinct set of client expectations. There’s a shift in priorities and mindset.
Here are the key characteristics of each role.
It’s one thing to know what a UX consultant does but another thing entirely to know how to become one.
Freelance UX designers may have an inside path because they’re used to the entrepreneurial lifestyle. Hunting down leads, selling to clients, and self-managing projects are already a part of their normal work life. The biggest challenge for UX freelancers looking to become consultants is repositioning the nature of their services. There will be a period of explaining to old clients and relearning how to market to new ones.
For UX designers who haven’t experienced full-time freelancing, there are two paths that make more sense than quitting a steady job and jumping headlong into consulting.
Join a consultancy and learn from actual UX consultants.
For those who are interested in consulting, but a tad bit intimidated by the waters of self-employment, it’s not a bad idea to join a consultancy. Some of the prominent UX consulting firms are Accenture Interactive, Boston Consulting Group, McKinsey, Forrester, IDEO, Frog Design, and Fjord.
Working at a consultancy can provide exposure to projects in multiple industries as well as opportunities to learn the ins-and-outs of the UX consulting process. Depending on the consultancy, designers may even be asked to help quote projects, sit in on presentations, and pitch ideas to clients.
To get the most out of working at a consultancy, designers need to stay observant, learn the business side of UX, and dive into unfamiliar roles. Otherwise, the long-held patterns of an “employee” mindset will be difficult to leave behind.
Keep the day job, but start with smaller, paid projects.
Aspiring consultants with solid UX jobs should consider staying employed as they slowly ramp up with paid side projects.
Start by mining personal networks for trustworthy people who might stand to benefit from an improved user experience. Former employers, small business owners, and leaders of community organizations are good options. Don’t worry about “warm” leads or crafting the perfect pitch—sales skills take time to grow. Honesty, humility, and a willingness to listen are the biggest factors in landing clients.
After three to five jobs, a new consultant will have a better understanding of their process, communication style, and the pace it'll take to keep their project pipeline full.
One more thing. Avoid unpaid or ‘trade-for-services’ projects. They’re fraught with all kinds of problems and warped expectations. It’s ok to offer a reduced rate, but there’s a lot to be learned from the practice of estimating, quoting, and collecting money from clients.
In addition to working with clients, UX consultants need to stay on top of a number of organizational tasks like meetings, client correspondence, and invoicing.
Opportunities within the field of UX design are remarkably varied. In every industry, there are difficult problems begging to be solved—problems requiring more than the sum total of one consultant’s knowledge.
UX consultant is one of many roles that designers may transition into over the course of a career.
For those considering a switch to consulting, a word of warning: Attaining titles doesn’t equal automatic satisfaction. Studies consistently show that the happiest workers:
Do jobs that align with their abilities;
Use their skills to help others;
Partner with people and organizations that trust and support them.
Job title, salary, and low-stress roles aren't necessarily the keys to a rewarding career. (80,000 Hours).
Becoming a UX consultant isn’t necessarily a step into something “better.” It’s simply a new challenge. For designers with the right mindset (and a polished skillset), the switch from UX designer to consultant will bring trial, reward, and all the wondrous unpredictability that comes with a life in design.
If you're looking for someone to manage your website, maintain your web presence, and meet the expectations of today's web-savvy consumers, consider hiring a flexible professional webmaster. Then keep them around month after month to help with various efforts as needs come and go. A webmaster is also a good fit if you have any of the following requirements:
Need someone you can rely on for quick fixes? Time on the clock with me means any issue at any time can be addressed quickly. You can rely on me.
Want to rest assured your website is meeting users' needs and functioning as intended 24/7? Be assured with me as your webmaster.
Need to make sure someone is keeping an eye on the uptime and security of the website?
Looking to bring in a new SEO team but reluctant to hand them the full set of website keys, because you've hired others before and it's a pain to hand over control and then take it back?
Having a hard time managing long-term goals because you're stuck on a legacy Content Management System?
Is your website slow to load and your users are bouncing?
Convinced? Click here to get our services.
Here's why you should think again about commissioning
In the context of planning and delivering a new hyperscale data center, historically the hierarchy of priorities would consign the subject of commissioning to somewhere close to the bottom of a long list of action points.
But there’s an increasing body of evidence emerging that this mindset may be seriously undermining your ability to effectively deliver, within an era of accelerated programs and increasingly punitive SLAs.
Here are the seven steps explaining why you should think again about commissioning:
The ability to commission equipment should be considered at the earliest stage of every project. Building a commissioning schedule into the program and sequencing how you will access equipment during the build process is essential. Time spent considering logistical challenges will be handsomely rewarded with seamless integration throughout the build program.
Get your technical services teams engaged at the outset, providing their input and insight into the design of building services at the initial stages, where they can help develop a comprehensive schedule of the testing and commissioning process. Make the testing regime simple, efficient and standardized - and, most importantly, transparent - so that commissioning is readily tracked and recorded centrally with a documentation output.
Don't assume that technology will solve all challenges. Documentation is often grouped rather than produced progressively, resulting in the late release of vital documents and project delays. A comprehensive plan must include a phased schedule and record of necessary documentation.
Never assume that products and systems will operate seamlessly unless you have the hard data to back this up. Not all products undergo a witnessed factory acceptance test, so unless you have verified data showing that you can successfully integrate them within your network, you must validate compliance before installation begins. Costly and time-consuming issues can be avoided with a thorough interrogation long before any product arrives on site.
Sounds simple but it’s so often overlooked on a busy site. Make sure that any delivered equipment is visually inspected for signs of damage. Any defects should be immediately reported, and a swift resolution sought. Smart tags should be fixed to equipment to provide the unique identification of equipment and associated commissioning data during construction and post-completion.
Each product and service should be physically and independently tested on site to verify performance criteria and ensure alignment with the design and specification. This is known as site acceptance testing (SAT). Remember, the physical testing operation is not synonymous with the release of the testing documentation, which needs to be independently tracked to ensure the process is completed.
Data networks are at the heart of data center systems. All data transmission networks are to be independently certified ahead of any joint systems testing to ensure the communication between equipment is functional. The isolation and certification of these networks is the precursor to full operations testing, but it’s easy to get this sequencing wrong and create needless delays.
The final integrated system test is the opportunity to observe the performance of a data center at maximum design load. Absolute rigor and attention to detail is fundamental at this stage, measuring and accurately recording switch positions, environmental conditions and failure scenarios to ensure operational compliance. Efficient progression to this stage marks the operational handover of the data center.
Changing the conversation with customers and key project stakeholders about the importance of commissioning is pivotal if you want to meet expectations for faster, day one operational data center facilities.
By identifying those critical pathways and processes that can have the most detrimental impact on the program, one can enhance project collaboration to deliver a better outcome. As technology advances, we can expect to see dynamic live reporting fall within our arsenal - however, our adage will always remain the same: ignore the importance of commissioning at your peril.
Before the beautiful web pages shine on the internet, before the owner's eyes get watery with ecstasy, before the web app attracts huge traffic, before it becomes a badass revenue-generating machine and before it creates an apogee of user experience, the web application goes through various stages that people are generally unaware of.
There are hours and hours spent designing the perfect UI/UX; monitors crammed with thousands of lines of code; altercations between designers and developers over why a button is a pixel off from the actual design. And of course, gallons of coffee gulped down along the way.
Alright, the process of developing a web app may seem baffling to most of you, but it is equally important to be aware of. And if you are looking to hire a web development company to build one, you should have an idea of how your project will be developed. In this blog, we have listed the seven stages of the web development life cycle and explained the usual web development process that we follow.
1. Understanding Client’s Needs
2. Deep Research and Analysis
3. Planning
4. Design
5. Development
6. Testing and Deployment
7. Post Deployment and Maintenance
It is often believed that the web development process starts with design and development, but in fact these stages arrive much later. The first step, and indeed the most crucial (and often ignored) one, is understanding the client's needs.
Identifying and understanding what exactly clients want helps in providing the perfect solution they are looking for. In some cases, when clients have a technical background, it's a lot easier to understand the needs and technicality they want in their projects. However, when clients are entirely new to the web and programming world, we ask questions and request further clarification, which helps us serve them better.
Each and every app is different from the others. So at this stage, our team researches and gathers as much relevant information as possible for the project. An e-commerce web app selling men's apparel will differ from an online job marketplace. Thus, deep research and analysis of the industry, target audience, competitors, the motto of the project, the expected outcomes, etc. provide the insights and knowledge required to develop an impeccable web app.
For instance, one of our projects, Getlitt, is a public service platform with a huge archive of events. Thus, to make an impact, we performed deep research and analysis to create a platform that would encourage the audience to engage more online.
“Give me six hours to chop down a tree and I will spend the first four sharpening the axe.” ― Abraham Lincoln
Did we say understanding the client's needs is the most crucial stage? Here's another for the list: planning. A solid plan backed by deep research and analysis is a roadmap toward the destination. At this stage, we define deliverables, sprints and Gantt charts with projected timelines and the resources needed to accomplish them.
Various other major decisions, such as formulating the sitemap, wireframing, planning the layout, UI/UX, selecting the right technology stack, etc., are made at this stage.
One of the important parts of the planning stage is deciding the sitemap. It is an organized structure of your web app that connects different pages according to their hierarchy and importance, helping users navigate the website easily.
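Conceptually, a sitemap is just a tree of pages. A small Python sketch of a hypothetical e-commerce hierarchy:

```python
# Hypothetical sitemap for an e-commerce web app, expressed as a tree of pages.
sitemap = {
    "Home": {
        "Shop": {
            "Men's Apparel": {},
            "Accessories": {},
        },
        "About": {},
        "Support": {
            "FAQ": {},
            "Contact": {},
        },
    },
}

def print_sitemap(node, depth=0):
    """Walk the tree and print each page, indented by its hierarchy level."""
    for page, children in node.items():
        print("  " * depth + page)
        print_sitemap(children, depth + 1)

print_sitemap(sitemap)
```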
Wireframes and mockups, on the other hand, give an outline of the web pages. A low-fidelity wireframe without any pictures or logos can be drawn on paper or in sophisticated software, and there are many tools on the market that make creating a wireframe easy. All of this planning takes place with the involvement of the client, so the client knows exactly how the blueprint of the project is formed.
Once we have the wireframes and the sitemap, it's time to design each page of the web application. At this stage, graphic designers breathe life into the approved wireframes with custom graphics, logos, colors, typography, animations, buttons, drop-down menus and more, based on the project's needs. Thus, your web app gets a tangible identity.
The design of the application is critical to the user experience. Users' first impressions of a website are as much as 94% design-related, so it is imperative to make sure your web app is aesthetically alluring to your audience. Even the minutest details, like the shadows of graphics or the color of a call-to-action button, must be taken into consideration. In fact, the colors of a website play a monumental role in providing a better experience to users. According to research, consumers form an initial judgment of a product within 90 seconds of interaction, and 62%-90% of that judgment is based on color alone. Further, different colors can evoke different emotions. Thus, using color thoughtfully while designing can bring you better results.
And now let’s come down to the nitty-gritty of the web application, i.e. the development. This is the stage where the designs approved by the client are transformed into a working model. The development process can be divided into two parts, i.e. frontend and backend.
Front-end Development
Front-end development, as the name suggests, is the development of the client-side app that users see. All the designs made during the previous stage are converted to HTML pages with the necessary animations and effects, and JavaScript frameworks/libraries such as Angular, React, Vue, Meteor, etc. are used to add sophisticated functionality. Considering the importance of mobile devices, making the web app responsive and mobile-friendly has become equally important.
Back-end Development
Back-end development refers to the server-side app that powers the front end and turns the user interface into a working web app. Back-end developers create the server-side application and database, integrate the business logic, and build everything that works under the hood.
After the web app is developed and before it is deployed to the server, it goes through several meticulous tests to ensure there are no bugs or issues. The quality team performs functionality, usability, compatibility, performance and other tests to ensure the web application is ready for users and launch. This testing also helps uncover ways to improve the web app in the near future. Once the quality assurance team gives the green light, the web app is deployed to the server, often using FTP (File Transfer Protocol).
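As a rough illustration of that last step, here is a minimal Python sketch of an FTP deployment using the standard library's ftplib; the host, credentials, and paths are placeholders, and many teams would use SFTP or FTPS instead:

```python
# Minimal sketch: uploading a built web app over FTP with Python's ftplib.
# Host, credentials, and paths are placeholders for illustration only.
import os
from ftplib import FTP

def deploy(local_dir: str, host: str, user: str, password: str, remote_dir: str):
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        ftp.cwd(remote_dir)
        for name in os.listdir(local_dir):
            path = os.path.join(local_dir, name)
            if os.path.isfile(path):
                with open(path, "rb") as fh:
                    ftp.storbinary(f"STOR {name}", fh)  # upload one file

deploy("./dist", "ftp.example.com", "deploy_user", "secret", "/public_html")
```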
But it doesn’t end there….
The web development process doesn't end after deployment. There are several post-deployment tasks to be carried out by the web development company, such as providing the client with the source code and project documents, working on their feedback, and offering post-deployment support and maintenance. This stage carries equal weight, because the real purpose of the web app begins once it's live for users. Further changes according to user feedback, ongoing support and maintenance, and new updates are all equally necessary.
The web development process consists of several stages, from understanding the client's needs and deep research and analysis to design, development and beyond. It can be a time-consuming and daunting task, but thankfully there are immense resources across the internet: many blogs and tutorials help owners dive deep into this process, and seasoned web development companies offer these services to clients.
Source: https://www.techuz.com/blog/web-development-process-a-guide-to-complete-web-development-life-cycle/
The characteristics of red teams and blue teams are as different as the techniques they use. This section will give you more insight into the purpose and roles of these two teams. You'll also better understand whether your own skills fit into these cybersecurity job descriptions, helping you choose the right road.
Get into the mind of an attacker and be as creative as they can be.
1. Think outside the box
The main characteristic of a red team is thinking outside the box: constantly finding new tools and techniques to better protect company security. Being on a red team bears a level of rebellion, as it borders on taboo—you're pushing against rules and legality while following white-hat techniques, and showing people the flaws their systems have. These aren't things everyone likes.
2. Deep knowledge of systems
Having deep knowledge of computer systems, protocols and libraries and known methodologies will give you a clearer road to success.
It's crucial for a red team to possess an understanding of all systems and to follow trends in technology. Knowledge of servers and databases will give you more options for discovering their vulnerabilities.
3. Software development
The benefits of knowing how to develop your own tools are substantial. Writing software takes a lot of practice and continuous learning, so the skill set it builds will help any red team perform the best offensive tactics possible.
4. Penetration testing
Penetration testing is the simulation of an attack on computer and network systems that helps assess security. It identifies vulnerabilities and any potential threats to provide a full risk assessment. Penetration testing is an essential part of red teams and is part of their “standard” procedures. It’s also used regularly by white hats; in fact, a red team adopts many tools that ethical hackers use.
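As a deliberately simple illustration of one pentesting building block, here is a minimal TCP connect-scan sketch in Python; the target address is a placeholder, and such scans should only ever run against systems you are authorized to test:

```python
# Minimal sketch: a TCP connect scan, one building block of penetration testing.
# Only scan hosts you are authorized to test; the target below is a placeholder.
import socket

def scan_ports(host: str, ports: range, timeout: float = 0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

print(scan_ports("192.0.2.10", range(20, 1025)))
```

Real engagements use far more capable tooling, but the principle is the same: enumerate what's exposed, then assess it.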
5. Social engineering
While performing security audits of any organization, testing whether people can be manipulated into performing actions that expose sensitive data is important, since human error is one of the most frequent causes of data breaches and leaks.
You’ll have to cover backdoors and vulnerabilities most people don’t even know about.
1. Organized and detail-oriented
Someone who plays more ‘by the book’ and with tried and trusted methods is more fitting as a blue team member. An extraordinarily detail-oriented mindset is needed to prevent leaving gaps in a company’s security infrastructure.
2. Cybersecurity analysis and threat profile
When assessing the security of a company or organization, you will need to create a risk or threat profile. A good threat profile includes potential attackers, real-life threat scenarios, and thorough preparation for future attacks by shoring up fronts that may be weak. Make use of OSINT and all publicly available data, and check out OSINT tools that can help you gather data about your target.
3. Hardening techniques
To be truly prepared for any attack or breach, technical hardening of all systems needs to occur, reducing the attack surface hackers may exploit. Hardening DNS is absolutely necessary, as it is one of the areas most often overlooked in hardening policies. You can follow our tips to prevent DNS attacks to reduce the attack surface even more.
4. Knowledge of detection systems
Be familiar with software applications that allow tracking of the network for any unusual and possibly malicious activity. Monitoring all network traffic, packet filters, existing firewalls and the like will provide a better grip on all activity in the company's systems.
5. SIEM
SIEM, or Security Information and Event Management, is software that offers real-time analysis of security events. It collects data from external sources and can analyze that data against specific criteria.
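A toy sketch of the kind of correlation a SIEM automates, using only the Python standard library; the log format and alert threshold here are hypothetical:

```python
# Minimal sketch of SIEM-style correlation: count failed logins per source IP
# and flag any IP that crosses a threshold. Log format and threshold are
# hypothetical stand-ins for real event sources and rules.
import re
from collections import Counter

FAILED_LOGIN = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5

def analyze(log_lines):
    failures = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(1)] += 1
    return [ip for ip, count in failures.items() if count >= THRESHOLD]

sample = ["Failed password for root from 198.51.100.7 port 2201"] * 6
print(analyze(sample))  # ['198.51.100.7'] -- candidate brute-force source
```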
You might think that when it comes to red teams and blue teams you'd favor one over the other, but the truth is that a complete, effective security infrastructure prepared for any cyber attack is possible only with the two teams working together.
The entire cybersecurity industry needs to know more about engaging both teams to work together and learn from each other. Some might call it the purple team, but whatever you call it, the unity of the red and blue teams is the only road to true and thorough cybersecurity.
Source: https://securitytrails.com/blog/cybersecurity-red-blue-team
In comparison to other forensic sciences, the field of computer forensics is relatively young. Unfortunately, many people do not understand what the term computer forensics means and what techniques are involved. In particular, there is a lack of clarity regarding the distinction between data extraction and data analysis. There is also confusion about how these two operations fit into the forensic process. The Cybercrime Lab in the Computer Crime and Intellectual Property Section (CCIPS) has developed a flowchart describing the digital forensic analysis methodology. Throughout this article, the flowchart is used as an aid in the explanation of the methodology and its steps.
The Cybercrime Lab developed this flowchart after consulting with numerous computer forensic examiners from several federal agencies. It is available on the public Web site at www.cybercrime.gov/forensics_gov/forensicschart.pdf. The flowchart is helpful as a guide to instruction and discussion. It also helps clarify the elements of the process. Many other resources are available on the section's public Web site, www.cybercrime.gov. In addition, anyone in the Criminal Division or U.S. Attorneys' offices can find additional resources on the new intranet site, CCIPS Online. Go to DOJ Net and click on the "CCIPS Online" link. You can also reach us at (202) 514-1026.
The complete definition of computer forensics is as follows: "The use of scientifically derived and proven methods toward the preservation, collection, validation, identification, analysis, interpretation, documentation and presentation of digital evidence derived from digital sources for the purpose of facilitating or furthering the reconstruction of events found to be criminal…." A Road Map for Digital Forensic Research, Report from the First Digital Forensic Research Workshop (DFRWS), available at http://dfrws.org/2001/dfrws-rm-final.pdf.
Defining computer forensics requires one more clarification. Many argue about whether computer forensics is a science or art. United States v. Brooks, 427 F.3d 1246, 1252 (10th Cir. 2005) ("Given the numerous ways information is stored on a computer, openly and surreptitiously, a search can be as much an art as a science."). The argument is unnecessary, however. The tools and methods are scientific and are verified scientifically, but their use necessarily involves elements of ability, judgment, and interpretation. Hence, the word "technique" is often used to sidestep the unproductive science/art dispute.
The key elements of computer forensics are listed below:
The use of scientific methods
Collection and preservation
Validation
Identification
Analysis and interpretation
Documentation and presentation
The Cybercrime Lab illustrates an overview of the process with Figure 1. The three steps, Preparation/Extraction, Identification, and Analysis, are highlighted because they are the focus of this article.
In practice, organizations may divide these functions between different groups. While this is acceptable and sometimes necessary, it can create a source of misunderstanding and frustration. In order for different law enforcement agencies to effectively work together, they must communicate clearly. The investigative team must keep the entire picture in mind and be explicit when referring to specific sections.
The prosecutor and forensic examiner must decide, and communicate to each other, how much of the process is to be completed at each stage of an investigation or prosecution. The process is potentially iterative, so they also must decide how many times to repeat the process. It is fundamentally important that everyone understand whether a case only needs preparation, extraction, and identification, or whether it also requires analysis.
The three steps in the forensics process discussed in this article come after examiners obtain forensic data and a request, but before reporting and case-level analysis is undertaken. Examiners try to be explicit about every process that occurs in the methodology. In certain situations, however, examiners may combine steps or condense parts of the process. When examiners speak of lists such as "Relevant Data List," they do not mean to imply that the lists are physical documents. The lists may be written or items committed to memory. Finally, keep in mind that examiners often repeat this entire process, since a finding or conclusion may indicate a new lead to be studied.
Examiners begin by asking whether there is enough information to proceed. They make sure a clear request is in hand and that there is sufficient data to attempt to answer it. If anything is missing, they coordinate with the requester. Otherwise, they continue to set up the process.
The first step in any forensic process is the validation of all hardware and software, to ensure that they work properly. There is still a debate in the forensics community about how frequently the software and equipment should be tested. Most people agree that, at a minimum, organizations should validate every piece of software and hardware after they purchase it and before they use it. They should also retest after any update, patch, or reconfiguration.
When the examiner's forensic platform is ready, he or she duplicates the forensic data provided in the request and verifies its integrity. This process assumes law enforcement has already obtained the data through appropriate legal process and created a forensic image. A forensic image is a bit-for-bit copy of the data that exists on the original media, without any additions or deletions. It also assumes the forensic examiner has received a working copy of the seized data. If examiners get original evidence, they need to make a working copy and guard the original's chain of custody. The examiners make sure the copy in their possession is intact and unaltered. They typically do this by verifying a hash, or digital fingerprint, of the evidence. If there are any problems, the examiners consult with the requester about how to proceed.
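A minimal sketch of that integrity check in Python; the file name and the expected digest below are placeholders:

```python
# Minimal sketch: verifying a working copy of a forensic image against a
# known-good hash. The image name and expected digest are placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)  # hash the image in chunks to bound memory use
    return digest.hexdigest()

expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
actual = sha256_of("evidence_image.dd")
print("intact" if actual == expected else "ALTERED - consult the requester")
```

If the computed digest matches the one recorded at acquisition, the examiner can proceed knowing the working copy is bit-for-bit identical to the original.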
After examiners verify the integrity of the data to be analyzed, a plan is developed to extract data. They organize and refine the forensic request into questions they understand and can answer. The forensic tools that enable them to answer these questions are selected. Examiners generally have preliminary ideas of what to look for, based on the request. They add these to a "Search Lead List," which is a running list of requested items. For example, the request might provide the lead "search for child pornography." Examiners list leads explicitly to help focus the examination. As they develop new leads, they add them to the list, and as they exhaust leads, they mark them "processed" or "done."
For each search lead, examiners extract relevant data and mark that search lead as processed. They add anything extracted to a second list called an "Extracted Data List." Examiners pursue all the search leads, adding results to this second list. Then they move to the next phase of the methodology, identification.
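A minimal sketch of how these running lists might be modeled in Python; the lead and the processing step are hypothetical stand-ins for real tool runs:

```python
# Minimal sketch of the running lists described above; entries are hypothetical.
search_leads = [{"lead": "keyword: offshore account", "status": "new"}]
extracted_data = []

def process_lead(lead):
    """Placeholder for a real tool run (keyword search, file carving, etc.)."""
    return [f"hit for {lead['lead']}"]

while any(l["status"] == "new" for l in search_leads):
    lead = next(l for l in search_leads if l["status"] == "new")
    extracted_data.extend(process_lead(lead))  # add results to Extracted Data List
    lead["status"] = "processed"               # mark the search lead as done

print(extracted_data)
```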
Examiners repeat the process of identification for each item on the Extracted Data List. First, they determine what type of item it is. If it is not relevant to the forensic request, they simply mark it as processed and move on. Just as in a physical search, if an examiner comes across an item that is incriminating, but outside the scope of the original search warrant, it is recommended that the examiner immediately stop all activity, notify the appropriate individuals, including the requester, and wait for further instructions. For example, law enforcement might seize a computer for evidence of tax fraud, but the examiner may find an image of child pornography. The most prudent approach, after finding evidence outside the scope of a warrant, is to stop the search and seek to expand the warrant's authority or to obtain a second warrant.
If an item is relevant to the forensic request, examiners document it on a third list, the Relevant Data List. This list is a collection of data relevant to answering the original forensic request. For example, in an identity theft case, relevant data might include social security numbers, images of false identification, or e-mails discussing identity theft, among other things. It is also possible for an item to generate yet another search lead. An email may reveal that a target was using another nickname. That would lead to a new keyword search for the new nickname. The examiners would go back and add that lead to the Search Lead List so that they would remember to investigate it completely.
An item can also point to a completely new potential source of data. For example, examiners might find a new e-mail account the target was using. After this discovery, law enforcement may want to subpoena the contents of the new e-mail account. Examiners might also find evidence indicating the target stored files on a removable universal serial bus (USB) drive—one that law enforcement did not find in the original search. Under these circumstances, law enforcement may consider getting a new search warrant to look for the USB drive. A forensic examination can point to many different types of new evidence. Some other examples include firewall logs, building access logs, and building video security footage. Examiners document these on a fourth list, the New Source of Data list.
After processing the Extracted Data list, examiners go back to any new leads developed. For any new data search leads, examiners consider going back to the Extraction step to process them. Similarly, for any new source of data that might lead to new evidence, examiners consider going all the way back to the process of obtaining and imaging that new forensic data.
At this point in the process, it is advisable for examiners to inform the requester of their initial findings. It is also a good time for examiners and the requester to discuss what they believe the return on investment will be for pursuing new leads. Depending on the stage of a case, extracted and identified relevant data may give the requester enough information to move the case forward, and examiners may not need to do further work. For example, in a child pornography case, if an examiner recovers an overwhelming number of child pornography images organized in user-created directories, a prosecutor may be able to secure a guilty plea without any further forensic analysis. If simple extracted and identified data is not sufficient, then examiners move to the next step, analysis.
In the analysis phase, examiners connect all the dots and paint a complete picture for the requester. For every item on the Relevant Data List, examiners answer questions like who, what, when, where, and how. They try to explain which user or application created, edited, received, or sent each item, and how it originally came into existence. Examiners also explain where they found it. Most importantly, they explain why all this information is significant and what it means to the case.
Often examiners can produce the most valuable analysis by looking at when things happened and producing a timeline that tells a coherent story. For each relevant item, examiners try to explain when it was created, accessed, modified, received, sent, viewed, deleted, and launched. They observe and explain a sequence of events and note which events happened at the same time.
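As a rough illustration, the Python sketch below builds a simple timeline from file timestamps; the paths are placeholders, and a real examination would read timestamps from the forensic image rather than a live file system:

```python
# Minimal sketch: building a simple activity timeline from file metadata.
# Paths are placeholders; real examinations pull timestamps from the image.
import os
from datetime import datetime, timezone

def timeline(paths):
    events = []
    for path in paths:
        st = os.stat(path)
        events.append((st.st_mtime, "modified", path))
        events.append((st.st_atime, "accessed", path))
    events.sort()  # chronological order tells the story
    for ts, kind, path in events:
        when = datetime.fromtimestamp(ts, tz=timezone.utc)
        print(f"{when:%Y-%m-%d %H:%M:%S} UTC  {kind:8}  {path}")

timeline(["invoice.doc", "ledger.xls"])
```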
Examiners document all their analysis, and other information relevant to the forensic request, and add it all to a fifth and final list, the "Analysis Results List." This is a list of all the meaningful data that answers who, what, when, where, how, and other questions. The information on this list satisfies the forensic request. Even at this late stage of the process, something might generate new data search leads or a source of data leads. If this happens, examiners add them to the appropriate lists and consider going back to examine them fully.
Finally, after examiners cycle through these steps enough times, they can respond to the forensic request. They move to the Forensic Reporting phase. This is the step where examiners document findings so that the requester can understand them and use them in the case. Forensic reporting is outside the scope of this article, but its importance cannot be overemphasized. The final report is the best way for examiners to communicate findings to the requester. Forensic reporting is important because the entire forensic process is only worth as much as the information examiners convey to the requester. After the reporting, the requester does case-level analysis where he or she (possibly with examiners) interprets the findings in the context of the whole case.
As examiners and requesters go through this process, they need to think about return on investment. During an examination, the steps of the process may be repeated several times. Everyone involved in the case must determine when to stop. Once the evidence obtained is sufficient for prosecution, the value of additional identification and analysis diminishes.
It is hoped that this article is a helpful introduction to computer forensics and the digital forensics methodology. This article and flowchart may serve as useful tools to guide discussions among examiners and personnel making forensic requests. The Cybercrime Lab in the Computer Crime and Intellectual Property Section (CCIPS) is always available for consultation. CCIPS personnel are also available to assist with issues or questions raised in this article and other related subjects.
Ovie L. Carroll is the Director of the Cybercrime Lab in the CCIPS. He has over twenty years of law enforcement experience. He previously served as the Special Agent in Charge of the Technical Crimes Unit at the Postal Inspector General's Office and as a Special Agent with the Air Force Office of Special Investigations.
Stephen K. Brannon is a Cybercrime Analyst in the CCIPS's Cybercrime Lab. He has worked at the Criminal Division in the Department of Justice and in information security at the FBI.
Thomas Song is a Senior Cybercrime Analyst in the CCIPS's Cybercrime Lab. He has over fifteen years in the computer crime and computer security profession. He specializes in computer forensics, computer intrusions, and computer security. He previously served as a Senior Computer Crime Investigator with the Technical Crimes Unit of the Postal Inspector General's Office.
Investigating a crime scene is not an easy job. It requires years of study to learn how to deal with hard cases, and most importantly, get those cases resolved. This applies not only to real-world crime scenes, but also to those in the digital world.
As new reports come to light and digital news agencies show cybercrime on the rise, it’s clear that cybercrime investigation plays a critical role in keeping the Internet safe.
Traditional law enforcement government agencies are now called upon to investigate not only real-world crimes, but also crimes on the Internet. Many well-known federal agencies even publish and update the “most wanted” list of cyber criminals, in the same way we’ve seen traditional criminals listed and publicized for years.
That’s why today we’ll answer the question, “What is a cybercrime investigation?” and explore the tools and techniques used by public and private cybercrime investigation agencies to deal with different types of cybercrime.
Before jumping into the “investigation” part, let’s go back to the basics: a digital crime or cybercrime is a crime that involves the usage of a computer, phone or any other digital device connected to a network.
These electronic devices can be used for two things: perform the cybercrime (that is, launch a cyber attack), or act as the victim, by receiving the attack from other malicious sources.
Therefore, a cybercrime investigation is the process of investigating, analyzing and recovering critical forensic digital data from the networks involved in the attack—this could be the Internet and/or a local network—in order to identify the authors of the digital crime and their true intentions.
Cybercrime investigators must be experts in computer science, understanding not only software, file systems and operating systems, but also how networks and hardware work. They must be knowledgeable enough to determine how the interactions between these components occur, to get a full picture of what happened, why it happened, when it happened, who performed the cybercrime itself, and how victims can protect themselves in the future against these types of cyber threats.
Criminal justice agencies are the operations behind cybercrime prevention campaigns and the investigation, monitoring and prosecution of digital criminals. Depending on your country of residence, a criminal justice agency will handle all cases related to cybercrime.
For example, in the U.S. and depending on the case, a cybercrime can be investigated by the FBI, U.S. Secret Service, Internet Crime Complaint Center, U.S. Postal Inspection Service or the Federal Trade Commission.
In other countries such as Spain, the national police and the civil guard take care of the entire process, no matter what type of cybercrime is being investigated.
This also changes from one country to another, but in general, this type of agency usually investigates cybercrime directly related to the agency.
For example, an intelligence agency should be in charge of investigating cybercrimes that have some connection to their organization, such as against its networks, employees or data; or have been performed by intelligence actors.
In the U.S., another good example is the military, which runs its own cybercrime investigations by using trained internal staff instead of relying on federal agencies.
Private security agencies like nibraas IT are also important in the fight against cybercrime, especially during the investigation process. While governments and national agencies run their own networks, servers and applications, they make up only a small fraction of the immense infrastructure and code kept running by private companies, projects, organizations and individuals around the world.
With this in mind, it's no surprise that private cybersecurity experts, research companies and blue teams play a critical role when it comes to preventing, monitoring, mitigating and investigating any type of cybersecurity crime against networks, systems or data running on third-party private data centers, networks, servers or simple home-based computers.
The wide range of cybercrime investigated by private agencies knows no limits, and includes, but is not limited to, hacking, cracking, virus and malware distribution, DDoS attacks, online frauds, identity theft and social engineering.
While techniques may vary depending on the type of cybercrime being investigated, as well as who is running the investigation, most digital crimes are subject to some common techniques used during the investigation process.
Background check: Creating and defining the background of the crime with known facts will help investigators set a starting point to establish what they are facing, and how much information they have when handling the initial cybercrime report.
Information gathering: One of the most important things any cybersecurity researcher must do is grab as much information as possible about the incident.
Was it an automated attack, or a human-based targeted crime? Was there any open opportunity for this attack to happen? What is the scope and impact? Can this attack be performed by anyone, or by certain people with specific skills? Who are the potential suspects? What digital crimes were committed? Where can the evidence be found? Do we have access to such evidence sources?
These and other questions are valuable considerations during the information gathering process.
A lot of national and federal agencies use interviews and surveillance reports to obtain proof of cybercrime. Surveillance involves not only security cameras, videos and photos, but also electronic device surveillance that details what’s being used and when, how it’s being used, and all the digital behavior involved.
One of the most common ways to collect data from cybercriminals is to configure a honeypot that will act as a victim while collecting evidence that can later be used against attackers, as we previously covered in our Top 20 Honeypots article.
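For illustration, a minimal low-interaction honeypot can be sketched in a few lines of Python; the port and log file below are arbitrary choices:

```python
# Minimal sketch of a low-interaction honeypot: listen on a port that nothing
# legitimate uses and log every connection attempt. Port and log file are
# arbitrary choices for illustration.
import socket
from datetime import datetime, timezone

def honeypot(port: int = 2222, logfile: str = "honeypot.log"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, (ip, src_port) = srv.accept()
            stamp = datetime.now(timezone.utc).isoformat()
            with open(logfile, "a") as log:
                log.write(f"{stamp} connection from {ip}:{src_port}\n")
            conn.close()  # collect the evidence, offer no real service

honeypot()
```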
Tracking and identifying the authors: This next step is sometimes performed during the information-gathering process, depending on how much information is already in hand. In order to identify the criminals behind the cyber attack, both private and public security agencies often work with ISPs and networking companies to get valuable log information about their connections, as well as historical service, websites and protocols used during the time they were connected.
This is often the slowest phase, as it requires legal permission from prosecutors and a court order to access the needed data.
Digital forensics: Once researchers have collected enough data about the cybercrime, it’s time to examine the digital systems that were affected, or those supposed to be involved in the origin of the attack. This process involves analyzing network connection raw data, hard drives, file systems, caching devices, RAM memory and more. Once the forensic work starts, the involved researcher will follow up on all the involved trails looking for fingerprints in system files, network and service logs, emails, web-browsing history, etc.
Cybercrime investigation tools include a lot of utilities, depending on the techniques you're using and the phase you're in. However, know that most of these tools are dedicated to the forensic analysis of data once you have the evidence in hand.
There are thousands of tools for each type of cybercrime, so this isn't intended to be a comprehensive list, but rather a quick look at some of the best resources available for performing forensic activity.
SIFT is a forensic tool collection created to help incident response teams and forensic researchers examine digital forensic data on several systems.
It supports different types of file systems such as FAT 12/16/32 as well as NTFS, HFS+, EXT2/3/4 and UFS1/2, plus vmdk, swap, RAM data and raw data formats.
When it comes to evidence image support, it works perfectly with single raw image files, AFF (Advanced Forensic Format), EWF (Expert Witness Format, EnCase), AFM (AFF with external metadata), and many others.
Other important features include: a 64-bit Ubuntu 16.04 LTS base system, the latest forensic tools, cross-compatibility between Linux and Microsoft Windows, the option to install as a stand-alone system, and vast documentation to answer all your forensic needs.
Best of all, it’s open source and completely free.
Written by Brian Carrier and known as TSK, The Sleuth Kit is an open source collection of Unix- and Windows-based forensic tools that helps researchers analyze disk images and recover files from those devices.
Its features include full parsing support for different file systems such as FAT/ExFAT, NTFS, Ext2/3/4, UFS 1/2, HFS, ISO 9660 and YAFFS2, allowing it to analyze almost any kind of image or disk from Windows-, Linux- and Unix-based operating systems.
Available from the command line or used as a library, The Sleuth Kit is the perfect ally for any person interested in data recovery from file systems and raw-based disk images.
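As a quick illustration of how TSK is commonly driven, the sketch below shells out to two of its command-line tools, mmls (partition layout) and fls (file listing); it assumes TSK is installed and on the PATH, and the image name is a placeholder:

```python
# Minimal sketch: driving two Sleuth Kit command-line tools from Python.
# Assumes TSK is installed and "disk.dd" is a raw image you may examine.
import subprocess

image = "disk.dd"

# Print the partition layout of the image.
subprocess.run(["mmls", image], check=True)

# Recursively list allocated and deleted files in the file system.
subprocess.run(["fls", "-r", image], check=True)
```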
X-Ways Forensics is one of the most complete forensic suites for Windows-based operating systems. It supports almost any version of Windows, making it one of the best in this particular market, and lets you easily work with versions such as Windows XP/2003/Vista/2008/7/8/8.1/2012/10, in both 32-bit and 64-bit flavors. One of its coolest features is that it’s fully portable, making it possible to run it from a memory stick and easily take it from one computer to another.
Its main features include the ability to perform disk cloning and imaging, and to read partitions from raw image files, HDDs, RAID arrays, LVM2 volumes and much more.
It also offers advanced detection of deleted partitions on FAT12, FAT16, FAT32, exFAT, TFAT, NTFS, Ext2, Ext3 and Ext4, as well as advanced file carving and file and directory catalog creation.
CAINE (Computer Aided INvestigative Environment) is not a simple cybercrime investigation application or suite; it’s a full Linux distribution used for digital forensic analysis.
It runs from a live CD and can help you extract data created on multiple operating systems such as Linux, Unix and Windows.
Whether you need file system, memory or network data extraction, CAINE can do it all, combining the best forensic software in both command-line and GUI-based interfaces.
It includes popular digital crime investigation apps such as The Sleuth Kit, Autopsy, Wireshark, PhotoRec, Tinfoleak and many others.
Known as DFF, the Digital Forensics Framework is open-source computer forensics software that allows digital forensics professionals to discover and save system activity on both Windows and Linux operating systems.
It allows researchers to access local and remote devices such as removable drives, local drives and remote server file systems, and also to reconstruct VMware virtual disks. When it comes to file systems, it can extract data from FAT12/16/32, EXT 2/3/4 and NTFS on both active and deleted files and directories. It even helps to inspect and recover data from memory dumps, including network connections, local files and processes.
Oxygen Forensic Detective is one of the best multi-platform forensic applications, used by security researchers and forensic professionals to browse all the critical data in a single place.
With it, you can easily extract data from multiple mobile devices, drones and computer operating systems: grabbing passwords from encrypted OS backups, bypassing screen locks on Android, getting critical call data, extracting flight data from drones, and collecting user information from Linux, macOS and Windows computers. It also supports IoT device data extraction.
Known as OCFA, the Open Computer Forensics Architecture is a forensic analysis framework written by the Dutch National Police Agency. They developed this software with the main goal of speeding up their digital crime investigations, allowing researchers to access data from a unified and user-friendly interface.
Many other popular cybercrime investigation tools, such as The Sleuth Kit, Scalpel and PhotoRec, have been integrated into it or form part of its core.
While the official project was discontinued some time ago, this tool is still used as one of the top forensic solutions by agencies all over the world. Many related projects still work with the OCFA code base; these can be found at the official website on SourceForge.
Bulk Extractor is one of the most popular apps used for extracting critical information from digital evidence data.
It works by extracting features such as URLs, email addresses and credit card numbers from ISO disk images, directories or individual files, including images, videos, office documents and compressed files.
It’s a tool that serves not only for data extraction, but for analysis and collection as well. And one of its best attributes is its wide support for almost any OS platform, including Linux, Unix, Mac and Windows.
Written in Perl by Phil Harvey, ExifTool is a command-line utility that can read, write and manipulate metadata in several types of media files, such as images and videos.
ExifTool supports extracting EXIF metadata (both common and format-specific) from images and videos, including GPS coordinates, thumbnail images, file type, permissions, file size, camera type and more.
It also allows you to save the results in a text-based format or plain HTML.
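If you’d rather drive ExifTool from a script, its -j flag emits metadata as JSON, which is easy to consume from Python. In this hedged sketch, the file name and the tags printed are just illustrative choices:

```python
import json
import subprocess

def read_metadata(path: str) -> dict:
    """Run ExifTool with its -j flag, which prints one JSON object per input file."""
    completed = subprocess.run(
        ["exiftool", "-j", path],
        capture_output=True,
        check=True,
        text=True,
    )
    return json.loads(completed.stdout)[0]

if __name__ == "__main__":
    meta = read_metadata("photo.jpg")  # placeholder file name
    for tag in ("FileType", "ImageSize", "GPSLatitude", "GPSLongitude"):
        print(tag, meta.get(tag, "not present"))
```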
SurfaceBrowser™ is your perfect ally for detecting the full online infrastructure of any company, and getting valuable intelligence data from DNS records, domain names and their historical WHOIS records, exposed subdomains, SSL certificates data and more.
Analyzing the surface of any company or domain name on the Internet is as important as analyzing local drives or RAM sticks: it can lead to finding critical data that could be linked to cybercrimes.
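SurfaceBrowser™ itself is a commercial product, but the general idea of gathering DNS intelligence about a domain can be sketched with the dnspython library; the domain below is a placeholder:

```python
import dns.resolver  # from the dnspython package: pip install dnspython

DOMAIN = "example.com"  # placeholder domain under investigation

# Query a few record types that often matter when mapping online infrastructure.
for record_type in ("A", "MX", "NS", "TXT"):
    try:
        answers = dns.resolver.resolve(DOMAIN, record_type)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        continue  # the domain simply has no records of this type
    for record in answers:
        print(record_type, record.to_text())
```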
Healthier. You can avoid contaminated commutes and public areas, which is very important in this era of MERS, SARS and coronaviruses everywhere, even setting aside the fact that the air quality itself can be poisonous enough to make anyone sick.
Flexible schedule. You can take breaks at any moment, feel no rush to hang up on your family members when they call, and eat lunch at any weird time you want.
Custom environment. Set your noise level just the way you want it, anywhere from insanely quiet to the front row of a Flosstradamus show.
Cozy clothes. You get to wear those sweatpants from college with the letters peeling off, or the leggings your friends don’t know you own. (And hopefully never will.)
It’s easier to make calls. You won’t have to scramble to find a conference room or deal with a particularly chatty co-worker. (Granted, kids and pets at home can make this tough for some remote employees.)
Knock off some weekend to-do’s. That Mt. Everest laundry pile waiting for you? That thing you set a reminder to get from the store 11 weeks ago? Cross. It. Off.
No office distractions. Avoid co-workers debating the merits of cryptocurrency, sirens wailing outside your window, the AC kicking in as you hide your icicle tears.
Zero commuting. From bed to … bed? Hey I’m not judging, it’s nice.
Save money. Lunch is expensive if you work in a city or downtown. In San Francisco, it’s not crazy to see a $15 sandwich or $4 coffee. At home, you can save big time by going to the store and preparing food.
Forget crowds and traffic. No stuffing yourself into a rickety transportation tube, having people scuff your new shoes, or walking behind agonizingly slow people who apparently don’t know what a straight line is. (Am I bitter? No … not bitter … )
More time with loved ones. Take care of a sick significant other at home, be ready for your kids earlier in the day, get some extra snuggles in with your doggo, or simply get some quiet time to yourself!
In an increasingly digital world, literature is evolving. Sales of e-readers continue to rise, yet the cost of digital books and texts has not decreased to the extent many initially predicted. With authors’ incomes collapsing to near “abject” levels, and with public libraries under threat from swingeing public spending cuts, we felt honor-bound to provide our fine readers with some valuable resources that could help save them valuable money.
While we of course advocate supporting your local independent book store – and independent publishing houses – and would urge you to purchase copies of your books where you can afford to, below you can find a collection of 45 websites where you can download tens of thousands of books, plays and texts for free. Oh, and these sites are also all completely legal, of course!
Browse works by Mark Twain, Emily Dickinson, Joseph Conrad, William Shakespeare, Geoffrey Chaucer, Edgar Allan Poe and other famous writers here.
Classic Bookshelf: This site has put classic novels online, from Charles Dickens to Charlotte Bronte.
The Online Books Page: The University of Pennsylvania hosts this book search and database.
Project Gutenberg: This famous site has over 27,000 free books online (in fact, many of the books listed on subsequent sites here can be found at PG; we list the others because users may prefer a different site’s interface, and some of the sites below also help tailor searches for specific types of books or plays).
Page by Page Books: Find books by Sir Arthur Conan Doyle and H.G. Wells, as well as speeches from George W. Bush on this site.
Classic Book Library: Genres here include historical fiction, history, science fiction, mystery, romance and children’s literature, but they’re all classics.
Classic Reader: Here you can read Shakespeare, young adult fiction and more.
Read Print: From George Orwell to Alexandre Dumas to George Eliot to Charles Darwin, this online library is stocked with the best classics.
Planet eBook: Download free classic literature titles here, from Dostoevsky to D.H. Lawrence to Joseph Conrad.
The Spectator Project: Montclair State University’s project features full-text, online versions of The Spectator and The Tatler.
Bibliomania: This site has more than 2,000 classic texts, plus study guides and reference books.
Online Library of Literature: Find full and unabridged texts of classic literature, including the Bronte sisters, Mark Twain and more.
Bartleby: Bartleby has much more than just the classics, but its collection of anthologies and other important novels made it famous.
Fiction.us: This site has a huge selection of novels, including works by Lewis Carroll, Willa Cather, Sherwood Anderson, Flaubert, George Eliot, F. Scott Fitzgerald and others.
Free Classic Literature: Find British authors like Shakespeare and Sir Arthur Conan Doyle, plus other authors like Jules Verne, Mark Twain, and more.
net: Here you can read plays by Chekhov, Thomas Hardy, Ben Jonson, Shakespeare, Edgar Allan Poe and others.
Plays: Read Pygmalion, Uncle Vanya or The Playboy of the Western World.
The Complete Works of William Shakespeare: MIT has made available all of Shakespeare’s comedies, tragedies, and histories.
Plays Online: This site catalogs “all the plays [they] know about that are available in full text versions online for free.”
ProPlay: This site has children’s plays, comedies, dramas and musicals.
Public Bookshelf: Find romance novels, mysteries and more.
The Internet Book Database of Fiction: This forum features fantasy and graphic novels, anime, J.K. Rowling and more.
Free Online Novels: Here you can find Christian novels, fantasy and graphic novels, adventure books, horror books and more.
Foxglove: This British site has free novels, satire and short stories.
Baen Free Library: Find books by Scott Gier, Keith Laumer and others.
The Road to Romance: This website has books by Patricia Cornwell and other romance novelists.
Get Free Ebooks: This site’s largest collection includes fiction books.
John T. Cullen: Read short stories from John T. Cullen here.
SF and Fantasy Books Online: Books here include Arabian Nights, Aesop’s Fables and more.
Free Novels Online and Free Online Cyber-Books: This list contains mostly fantasy books.
The Literature Network: This site features forums, a copy of The King James Bible, and over 3,000 short stories and poems.
Poetry: This list includes “The Raven,” “O Captain! My Captain!” and “The Ballad of Bonnie and Clyde.”
Poem Hunter: Find free poems, lyrics and quotations on this site.
Famous Poetry Online: Read limericks, love poetry, and poems by Robert Browning, Emily Dickinson, John Donne, Lord Byron and others.
Google Poetry: Google Books has a large selection of poetry, from The Canterbury Tales to Beowulf to Walt Whitman.
com: Read poems by Maya Angelou, William Blake, Sylvia Plath and more.
com: Rudyard Kipling, Allen Ginsberg and Alfred Lord Tennyson are all featured here.
com: On this site, you can download free poetry ebooks.
Banned Books: Here you can follow links of banned books to their full text online.
World eBook Library: This monstrous collection includes classics, encyclopaedias, children’s books and a lot more.
DailyLit: DailyLit has everything from Moby Dick to the more recent phenomenon, Skinny Bitch.
A Celebration of Women Writers: The University of Pennsylvania’s page for women writers includes Newbery winners.
Free Online Novels: These novels are fully online and range from romance to religious fiction to historical fiction.
ManyBooks.net: Download mysteries and other books for your iPhone or eBook reader here.
Authorama: Books here are pulled from Google Books and more. You’ll find history books, novels and more.
Prize-winning books online: Use this directory to connect to full-text copies of Newbery winners, Nobel Prize winners and Pulitzer winners.