Thursday, December 20, 2007

Wireless Networks and Managed Services

Everybody's Doing IT

Salang, part of the country-wide network in Afghanistan

Converged networks have finally come of age, and users are realising that connectivity for voice, data or video by means of proven wireless technology offers the most reliable and cost-effective implementation.

Whether it be the simplest of building to building links, or the most extensive carrier-class wide area networks, wireless provides the highest availability solutions for voice, video and broadband data networks.

SIAE Groupe Pathfinder offers the most comprehensive and professional set of turnkey wireless services, from design through delivery and support. The combination of a global microwave manufacturer with a tried and trusted range of turnkey wireless services, spanning design consultancy, project management, build and support, makes for a truly differentiated offering.

With over 14,000 links located in the UK and many more overseas, our extensive experience is your assurance of quality. Why not call us now for a free quotation?

SIAE Groupe Pathfinder

Groupe Pathfinder Ltd., the UK wireless services provider trading since 1997, was recently acquired by SIAE Microelettronica, the global microwave equipment manufacturer based in Milan, Italy. As a result, a number of integration and investment activities were initiated to bring significant advantages to the UK-based services company.

SIAE Groupe Pathfinder (SIAE GPL) is now an integration of two parts: Groupe Pathfinder Ltd and the parent company SIAE Microelettronica, which has been selling equipment in the UK, mainly in the operator segment, since 1986. As a single entity, the company now offers the strongest combination of products and services - whether that be a single radio link delivered to any destination in the world, 10,000 links delivered to one operator, a serviced installation of communication links, or a full turnkey design, build and support service for a country-wide Wide Area Network.


SIAE Complete Network Solutions

SIAE GP can provide the most comprehensive of wireless network solutions.

These may comprise solutions selected entirely from the SIAE range, from a mix of SIAE and SIAE partner products, or using no SIAE product at all - what is critical to us at SIAE GP is that the customer is provided with the most appropriate solution to the requirement.

The example below shows a full solution based on SIAE product, illustrating in some depth the extent of the company's capability.

Example SIAE Network

Example Wide Area Network

The diagram below shows a typical wireless WAN solution from SIAE GPL, one of many installed over the last few years.

Working from a list of customer site locations, and a description of the service required at each location, our Sales and Design Teams will be able to quickly provide detailed solutions and discuss the options with you.

Customer Wide Area Network

Why not contact our Sales Team now, so that we can start to help with your own unique challenges?

Once we have validated your enquiry, you will find that we are willing to work with you in detail to rationalise your requirement and arrive at an optimised solution.

You can be confident that you will end up paying no more than you need and receive an installed and supported solution that delivers great service with significant benefits. Contact our Customer Team now.

komputer 9

Wide Area Networks

CNS's strength is in its ability to work with customers to come up with the right solution to their computing problems.

In working for a local city government in Miramar with satellite locations, CNS can illustrate how a number of geographically separated locations can be linked together to share information immediately, solving internal communication challenges. Whether it's connecting branch offices locally or across the globe, CNS can design and implement the best-performing, most cost-effective network system to fit your business needs.

At CNS we understand that every customer has unique needs. The customer is always part of the design process from start to finish, with our engineers educating you every step of the way.




komputer 8

Display Boards: Networking

Clients and Servers on a LAN

The emphasis here was that the future of computing would not be stand-alone systems but workstations connected by a Local Area Network to a set of specialist servers. At this time there were no standards for Local Area Networks, and the technology being pushed in the UK was the Cambridge Ring. No Ethernet standard existed either, and the view in the UK was that the Cambridge Ring was the better technology.

The servers shown include SERC's CRAY at Daresbury and a satellite gateway (RAL had satellite connections to CERN, locally in the UK, and elsewhere).

ICL had developed a massively parallel SIMD architecture called the Distributed Array Processor (DAP). There was a proposal from RAL that ICL should integrate a mini DAP with the PERQ, with the DAP's memory serving as the memory buffer for the display.

Clients and Servers, March 1981

Cambridge Ring

The Cambridge Data Ring was at the time the standard Local Area Network technology in the UK. The Joint Network Team, responsible for network developments in the UK academic sector, was putting money into local development projects associated with the Cambridge Ring, as was the Department of Industry. With hindsight, going for Ethernet rather than the Cambridge Ring, even though Ethernet was seen as the inferior technology, would have been better. The decision mirrors the UK's support for ISO protocols long after TCP/IP had become the de facto standard.

The Cambridge Ring consisted of a set of stations that passed messages around the ring, with each receiver taking off the messages addressed to it and passing on the rest. Cambridge Rings could cover larger areas than the equivalent Ethernet technology, and there was no contention for the medium.
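The passing scheme described above can be sketched in a few lines of Python. This is a toy illustration of ring-style forwarding only, not the actual Cambridge Ring slotted-ring protocol; the station names and function are invented for the example:

```python
# Toy illustration of ring-style forwarding (not the real Cambridge Ring
# slotted-ring protocol): each station inspects the circulating packet
# and either takes it off the ring (if addressed to it) or passes it on.

def deliver(stations, src, dst):
    """Forward a packet around the ring from src until dst consumes it.

    Returns the number of hops travelled.
    """
    n = len(stations)
    pos = stations.index(src)
    hops = 0
    while stations[pos] != dst:
        pos = (pos + 1) % n   # pass on to the next station in the ring
        hops += 1
    # dst takes the packet off the ring here; later stations never see it
    return hops

ring = ["A", "B", "C", "D", "E"]
print(deliver(ring, "B", "E"))  # 3 hops: B -> C -> D -> E
```

Because traffic always flows one way around the ring, there is no contention for the medium, unlike Ethernet's shared-bus collisions.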

Cambridge Ring, March 1981

KOMPUTER 6

Science-Driven Systems

NERSC deploys five major types of hardware systems—computational systems, filesystems, storage systems, networks, and analytics and visualization systems—all of which must be carefully designed and integrated to maximize productivity for the entire computational community supported by the DOE Office of Science. In 2005 NERSC implemented major upgrades or improvements in all five categories, as described below.

System specifications must meet requirements that are constantly evolving because of technological progress and the changing needs of scientists. Therefore, NERSC is constantly planning for the future, not just by tracking trends in science and technology and planning new system procurements, but also by actively influencing the direction of technological development through efforts such as the Science-Driven System Architecture collaborations.

Two New Clusters: Jacquard and Bassi

In August 2005 NERSC accepted a 722-processor Linux Networx Evolocity cluster system named “Jacquard” for full production use (Figure 7). The acceptance test included a 14-day availability test, during which a select group of NERSC users were given full access to the Jacquard cluster to thoroughly test the entire system in production operation. Jacquard had 99 percent availability during the testing while users and scientists ran a variety of codes and jobs on the system.

Figure 7. Jacquard is a 722-processor Linux Networx Evolocity cluster system with a theoretical peak performance of 2.8 teraflop/s.

The Jacquard system is one of the largest production InfiniBand-based Linux cluster systems and met rigorous acceptance criteria for performance, reliability, and functionality that are unprecedented for an InfiniBand cluster. Jacquard is the first system to deploy Mellanox 12x InfiniBand uplinks in its fat-tree interconnect, reducing network hot spots and improving reliability by dramatically reducing the number of cables required.

The system has 640 AMD 2.2 GHz Opteron processors devoted to computation, with the rest used for I/O, interactive work, testing, and interconnect management. Jacquard has a peak performance of 2.8 teraflop/s. Storage from DataDirect Networks provides 30 TB of globally available formatted storage.
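As a back-of-envelope check, the quoted 2.8 teraflop/s peak is consistent with the 640 compute processors if each Opteron retires two double-precision floating-point operations per clock cycle, which was typical for that processor generation (an assumption; the flops-per-cycle figure is not stated in the text):

```python
# Back-of-envelope check of Jacquard's quoted 2.8 teraflop/s peak.
# Assumes 2 double-precision flops per clock per Opteron processor,
# typical for that generation (an assumption, not stated in the text).
compute_procs = 640       # processors devoted to computation
clock_hz = 2.2e9          # 2.2 GHz
flops_per_cycle = 2       # assumed
peak_tf = compute_procs * clock_hz * flops_per_cycle / 1e12
print(round(peak_tf, 1))  # ~2.8 teraflop/s
```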

Following the tradition at NERSC, the system was named for someone who has had an impact on science or computing. In 1801, Joseph-Marie Jacquard invented the Jacquard loom, which was the first programmable machine. The Jacquard loom used punched cards and a control unit that allowed a skilled user to program detailed patterns on the loom.

In January 2006, NERSC launched an 888-processor IBM cluster named “Bassi” into production use (Figure 8). Earlier, during the acceptance testing, users reported that codes ran from 3 to 10 times faster on Bassi than on NERSC’s other IBM supercomputer, Seaborg, leading one tester to call the system the “best machine I have seen.”

Figure 8. Bassi is an 888-processor IBM p575 POWER5 system with a theoretical peak performance of 6.7 teraflop/s.

Bassi is an IBM p575 POWER5 system, and each processor has a theoretical peak performance of 7.6 gigaflop/s. The processors are distributed among 111 compute nodes with eight processors per node. Processors on each node have a shared memory pool of 32 GB. A Bassi node is an example of a shared memory processor, or SMP.
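The quoted Bassi figures can be cross-checked against each other: 111 nodes of eight processors gives 888 processors, and 888 processors at 7.6 gigaflop/s each reproduces the stated 6.7 teraflop/s theoretical peak:

```python
# Cross-check of the Bassi figures quoted above.
nodes = 111
procs_per_node = 8
peak_per_proc_gf = 7.6                 # gigaflop/s per POWER5 processor

total_procs = nodes * procs_per_node   # 888 processors
peak_tf = total_procs * peak_per_proc_gf / 1000.0
print(total_procs, round(peak_tf, 1))  # 888 processors, ~6.7 teraflop/s
```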

The compute nodes are connected to each other with a high-bandwidth, low-latency switching network. Each node runs its own full instance of the standard AIX operating system. The disk storage system is a distributed, parallel I/O system called GPFS (IBM’s General Parallel File System). Additional nodes serve exclusively as GPFS servers. Bassi’s network switch is the IBM “Federation” HPS switch, which is connected to a two-link network adapter on each node.

One of the test users for NERSC’s two new clusters was Robert Duke of the University of North Carolina, Chapel Hill, the author of the PMEMD code, which is the parallel workhorse in modern versions of the popular chemistry code AMBER. PMEMD is widely used for molecular dynamics simulations and is also part of NERSC’s benchmark applications suite. Duke has worked with NERSC’s David Skinner to port and improve the performance of PMEMD on NERSC systems.

“I have to say that both of these machines are really nothing short of fabulous,” Duke wrote to Skinner. “While Jacquard is perhaps the best-performing commodity cluster I have seen, Bassi is the best machine I have seen, period.”

Other early users during the acceptance testing included the INCITE project team “Direct Numerical Simulation of Turbulent Nonpremixed Combustion.” “Our project required a very long stretch of using a large fraction of Bassi processors—512 processors for essentially an entire month,” recounted Evatt Hawkes. “During this period we experienced only a few minor problems, which is exceptional for a pre-production machine, and enabled us to complete our project against a tight deadline. We were very impressed with the reliability of the machine.”

Hawkes noted that their code also ported quickly to Bassi, starting with a code already ported to Seaborg’s architecture. “Bassi performs very well for our code. With Bassi’s faster processors we were able to run on far fewer processors (512 on Bassi as opposed to 4,096 on Seaborg) and still complete the simulations more rapidly,” Hawkes added. “Based on scalar tests, it is approximately 7 times faster than Seaborg and 1½ times faster than a 2.0 GHz Opteron processor. Also, the parallel efficiency is very good. In a weak scaling test, we obtained approximately 78 percent parallel efficiency using 768 processors, compared with about 70 percent on Seaborg.”

The machine is named in honor of Laura Bassi, a noted Newtonian physicist of the eighteenth century. Appointed a professor at the University of Bologna in 1731, Bassi was the first woman to officially teach at a European university.

New Visual Analytics Server: DaVinci

In mid-August, NERSC put into production a new server specifically tailored to data-intensive visualization and analysis. The 32-processor SGI Altix, called DaVinci (Figure 9), offers interactive access to large amounts of memory and high performance I/O capabilities well suited for analyzing large-scale data produced by the NERSC high performance computing systems (Bassi, Jacquard, and Seaborg).

With its 192 gigabytes (GB) of RAM and 25 terabytes (TB) of disk, DaVinci’s system balance is biased toward memory and I/O, which is different from the other systems at NERSC. This balance favors data-intensive analysis and interactive visualization. DaVinci has 6 GB of memory per processor, compared to 4 GB per processor on Jacquard and Bassi and 1 GB on Seaborg.

Users can obtain interactive access to 80 GB of memory from a single application (or all 192 GB of memory by prior arrangement), whereas the interactive limits on production NERSC supercomputing systems restrict interactive tasks to a smaller amount of memory (256 MB on login nodes). While DaVinci is available primarily for interactive use, the system is also configured to run batch jobs, especially those jobs that are data intensive.

Figure 9. DaVinci is a 32-processor SGI Altix with 6 GB of memory per processor and 25 TB of disk memory, a configuration designed for data-intensive analysis and interactive visualization.

The new server runs a number of visualization, statistics, and mathematics applications, including IDL, AVS/Express, CEI Ensight, VisIT (a parallel visualization application from Lawrence Livermore National Laboratory), Maple, Mathematica, and MatLab. Many users depend on IDL and MatLab to process or reorganize data in preparation for visualization. The large memory is particularly beneficial for these types of jobs.

DaVinci is connected to the NERSC Global Filesystem (see below), High Performance Storage System (HPSS), and ESnet networks by two independent 10 gigabit Ethernet connections.

With DaVinci now in production, NERSC has retired the previous visualization server, Escher, and the math server, Newton.

NERSC Global Filesystem

In early 2006, NERSC deployed the NERSC Global Filesystem (NGF) into production, providing seamless data access from all of the Center’s computational and analysis resources. NGF is intended to facilitate sharing of data between users and/or machines. For example, if a project has multiple users who must all access a common set of data files, NGF provides a common area for those files. Alternatively, when sharing data between machines, NGF eliminates the need to copy large datasets from one machine to another. For example, because NGF has a single unified namespace, a user can run a highly parallel simulation on Seaborg, followed by a serial or modestly parallel post-processing step on Jacquard, and then perform a data analysis or visualization step on DaVinci—all without having to explicitly move a single data file.

NGF’s single unified namespace makes it easier for users to manage their data across multiple systems (Figure 10). Users no longer need to keep track of multiple copies of programs and data, and they no longer need to copy data between NERSC systems for pre- and post-processing. NGF provides several other benefits as well: storage utilization is more efficient because of decreased fragmentation; computational resource utilization is more efficient because users can more easily run jobs on an appropriate resource; NGF provides improved methods of backing up user data; and NGF improves system security by eliminating the need for collaborators to use “group” or “world” permissions.

“NGF stitches all of our systems together,” said Greg Butler, leader of the NGF project. “When you go from system to system, your data is just there. Users don’t have to manually move their data or keep track of it. They can now see their data simultaneously and access the data simultaneously.”


Figure 10. NGF is the first production global filesystem spanning five platforms (Seaborg, Jacquard, PDSF, DaVinci and Bassi), three architectures, and four different vendors.

NERSC staff began adding NGF to computing systems in October 2005, starting with the DaVinci visualization cluster and finishing with the Seaborg system in December. To help test the system before it entered production, a number of NERSC users were given preproduction access to NGF. Early users helped identify problems with NGF so they could be addressed before the filesystem was made available to the general user community.

“I have been using the NGF for some time now, and it’s made my work a lot easier on the NERSC systems,” said Martin White, a physicist at Berkeley Lab. “I have at times accessed files on NGF from all three compute platforms (Seaborg, Jacquard, and Bassi) semi-simultaneously.”

NGF also makes it easier for members of collaborative groups to access data, as well as ensure data consistency by eliminating multiple copies of critical data. Christian Ott, a Ph.D. student and member of a team studying core-collapse supernovae, wrote that “the project directories make our collaboration much more efficient. We can now easily look at the output of the runs managed by other team members and monitor their progress. We are also sharing standard input data for our simulations.”

NERSC General Manager Bill Kramer said that as far as he knows, NGF is the first production global filesystem spanning five platforms (Seaborg, Bassi, Jacquard, DaVinci, and PDSF), three architectures, and four different vendors. While other centers and distributed computing projects such as the National Science Foundation’s TeraGrid may also have shared filesystems, Butler said he thinks NGF is unique in its heterogeneity.

The heterogeneous approach of NGF is a key component of NERSC’s five-year plan. This approach is important because NERSC typically procures a major new computational system every three years, then operates it for five years to support DOE research. Consequently, NERSC operates in a heterogeneous environment with systems from multiple vendors, multiple platforms, different system architectures, and multiple operating systems. The deployed filesystem must operate in the same heterogeneous client environment throughout its lifetime.

Butler noted that the project, which is based on IBM’s proven GPFS technology (in which NERSC was a research partner), started about five years ago. While the computing systems, storage, and interconnects were mostly in place, deploying a shared filesystem among all the resources was a major step beyond a parallel filesystem. In addition to the different system architectures, there were also different operating systems to contend with. However, the last servers and storage have now been deployed. To keep everything running and ensure a graceful shutdown in the event of a power outage, a large uninterruptible power supply has been installed in the basement of the Oakland Scientific Facility.

While NGF is a significant change for NERSC users, it also “fundamentally changes the Center in terms of our perspective,” Butler said. For example, when the staff needs to do maintenance on the filesystem, the various groups need to coordinate their efforts and take all the systems down at once.

Storage servers, accessing the consolidated storage using the shared-disk filesystems, provide hierarchical storage management, backup, and archival services. The first phase of NGF is focused on function rather than raw performance, but to be effective, NGF has to have performance comparable to native cluster filesystems. The current capacity of NGF is approximately 70 TB of user-accessible storage and 50 million inodes (the data structures for individual files). Default project quotas are 1 TB and 250,000 inodes. The system has a sustainable bandwidth of 3 GB/sec for streaming I/O, although actual performance for user applications will depend on a variety of factors. Because NGF is a distributed network filesystem, its performance will be slightly lower than that of filesystems local to NERSC compute platforms; this should only be an issue for applications whose performance is I/O bound.
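For a rough sense of scale, the default quotas bound how many projects the first-phase capacity could accommodate, and show that capacity, not inodes, is the tighter limit:

```python
# Rough scale arithmetic from the NGF figures quoted above: how many
# projects the first-phase capacity could hold at the default quotas.
capacity_tb = 70
total_inodes = 50_000_000
quota_tb = 1
quota_inodes = 250_000

print(capacity_tb // quota_tb)       # ~70 projects by capacity
print(total_inodes // quota_inodes)  # 200 projects by inode quota
```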

NGF will grow in both capacity and bandwidth over the next several years, eventually replacing or dwarfing the amount of local storage on systems. NERSC is also working to seamlessly integrate NGF with the HPSS data archive to create much larger “virtual” data storage for projects. Once NGF is completely operational within the NERSC facility, Butler said, users at other centers, such as the National Center for Atmospheric Research and NASA Ames Research Center, could be allowed to remotely access the NERSC filesystem, allowing users to read and visualize data without having to execute file transfers. Eventually, the same capability could be extended to experimental research sites, such as accelerator labs.

NGF was made possible by IBM’s decision to make its GPFS software available across mixed-vendor supercomputing systems. This strategy was a direct result of IBM’s collaboration with NERSC. “Thank you for driving us in this direction,” wrote IBM Federal Client Executive Mike Henesy to NERSC General Manager Bill Kramer when IBM announced the project in December 2005. “It’s quite clear we would never have reached this point without your leadership!”

NERSC’s Mass Storage Group collaborated with IBM and the San Diego Supercomputer Center to develop a Hierarchical Storage Manager (HSM) that can be used with IBM’s GPFS. The HSM capability with GPFS provides a recoverable GPFS filesystem that is transparent to users and fully backed up and recoverable from NERSC’s multi-petabyte archive on HPSS. GPFS and HPSS are both cluster storage software: GPFS is a shared disk filesystem, while HPSS supports both disk and tape, moving less-used data to tape while keeping current data on disk.

One of the key capabilities of the GPFS/HPSS HSM is that users’ files are automatically backed up on HPSS as they are created. Additionally, files on the GPFS which have not been accessed for a specified period of time are automatically migrated from online resources as space is needed by users for files currently in use. Since the purged files are already backed up on HPSS, they can easily be automatically retrieved by users when needed, and the users do not need to know where the files are stored to access them. “This gives the user the appearance of almost unlimited disk storage space without the cost,” said NERSC’s Mass Storage Group Leader Nancy Meyer.
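The migrate-and-recall behaviour described above can be sketched as follows. All class and method names here are hypothetical illustrations of the policy, not the real GPFS or HPSS interfaces:

```python
# Hypothetical sketch of the policy described above: a file untouched
# for longer than a threshold is purged from disk (its HPSS tape copy
# already exists) and recalled transparently on the next access. All
# names here are illustrative, not the real GPFS/HPSS interfaces.

PURGE_AGE = 30 * 24 * 3600   # e.g. 30 days without access, in seconds

class HSMFile:
    def __init__(self, name, data, last_access):
        self.name = name
        self.on_disk = data      # resident disk copy (None once purged)
        self.on_tape = data      # backup made on HPSS at creation time
        self.last_access = last_access

    def maybe_purge(self, now):
        """Free the disk copy if the file is cold; the tape copy remains."""
        if self.on_disk is not None and now - self.last_access > PURGE_AGE:
            self.on_disk = None

    def read(self, now):
        """User access: recall from tape transparently if purged."""
        if self.on_disk is None:
            self.on_disk = self.on_tape   # automatic retrieval from HPSS
        self.last_access = now
        return self.on_disk
```

A purged file is recalled on read without the user knowing where it was stored, which is the "almost unlimited disk storage" effect Meyer describes.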

This capability was demonstrated in the Berkeley Lab and IBM booths at the SC05 conference. Bob Coyne of IBM, the industry co-chair of the HPSS executive committee, said, “There are at least ten institutions at SC05 who are both HPSS and GPFS users, many with over a petabyte of data, who have expressed interest in this capability. HPSS/GPFS will not only serve these existing users but will be an important step in simplifying the storage tools of the largest supercomputer centers and making them available to research institutions, universities, and commercial users.”

“Globally accessible data is becoming the most important part of Grid computing,” said Phil Andrews of the San Diego Supercomputer Center. “The immense quantity of information demands full vertical integration from a transparent user interface via a high performance filesystem to an enormously capable archival manager. The integration of HPSS and GPFS closes the gap between the long-term archival storage and the ultra high performance user access mechanisms.”

The GPFS/HPSS HSM will be included in the release of HPSS 6.2 in spring 2006.

Integrating HPSS into Grids

NERSC’s Mass Storage Group is currently involved in another development collaboration, this one with Argonne National Laboratory and IBM, to integrate HPSS accessibility into the Globus Toolkit for Grid applications.

At Argonne, researchers are adding functionality to the Grid file transfer daemon so that the appropriate class of service can be requested from HPSS. IBM is contributing the development of an easy-to-call library of parallel I/O routines that work with HPSS structures and are also easy to integrate into the file transfer daemon. This library will ensure that Grid file transfer requests to HPSS movers are handled correctly.

NERSC is providing the HPSS platform and testbed system for IBM and Argonne to do their respective development projects. As pieces are completed, NERSC tests the components and works with the developers to help identify and resolve problems.

The public release of this capability is scheduled with HPSS 6.2, as well as future releases of the Globus Toolkit.

Bay Area MAN Inaugurated

On August 23, 2005, the NERSC Center became the first of six DOE research sites to go into full production on the Energy Sciences Network's (ESnet's) new San Francisco Bay Area Metropolitan Area Network (MAN). The new MAN provides dual connectivity at 20 to 30 gigabits per second (10 to 50 times the previous site bandwidths, depending on the site using the ring) while significantly reducing the overall cost.

The connection to NERSC consists of two 10-gigabit Ethernet links. One link is used for production scientific computing traffic, while the second is dedicated to special networking needs, such as moving terabyte-scale datasets between research sites or transferring large datasets which are not TCP-friendly.

“What this means is that NERSC is now connected to ESnet at the same speed as ESnet’s backbone network,” said ESnet engineer Eli Dart.

The new architecture is designed to meet the increasing demand for network bandwidth and advanced network services as next-generation scientific instruments and supercomputers come on line. Through a contract with Qwest Communications, the San Francisco Bay Area MAN provides dual connectivity to six DOE sites—the Stanford Linear Accelerator Center, Lawrence Berkeley National Laboratory, the Joint Genome Institute, NERSC, Lawrence Livermore National Laboratory, and Sandia National Laboratories/California (Figure 11). The MAN also provides high-speed access to California’s higher education network (CENIC), NASA’s Ames Research Center, and DOE’s R&D network, Ultra Science Net. The Bay Area MAN connects to both the existing ESnet production backbone and the first segments of the new Science Data Network backbone.

The connection between the MAN and NERSC was formally inaugurated on June 24 by DOE Office of Science Director Raymond Orbach and Berkeley Lab Director Steven Chu (Figure 12).


Figure 11. ESnet’s new San Francisco Bay Area Metropolitan Area Network provides dual connectivity at 20 to 30 gigabits per second to six DOE sites and NASA Ames Research Center.

Figure 12. DOE Office of Science Director Raymond Orbach (left) and Berkeley Lab Director Steven Chu made the ceremonial connection between NERSC and ESnet in June. After testing, the full production connection was launched in August.

Another Checkpoint/Restart Milestone

On the weekend of June 11 and 12, 2005, IBM personnel used NERSC’s Seaborg supercomputer for dedicated testing of IBM’s latest HPC Software Stack, a set of tools for high performance computing. To maximize system utilization for NERSC users, instead of “draining” the system (letting running jobs continue to completion) before starting this dedicated testing, NERSC staff checkpointed all running jobs at the start of the testing period. “Checkpointing” means stopping a program in progress and saving the current state of the program and its data—in effect, “bookmarking” where the program left off so it can start up later in exactly the same place.
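The idea can be illustrated with a toy application-level example. Real HPC checkpoint/restart captures full process state at the system level, so this sketch is only an analogy for the save-and-resume behaviour:

```python
# Toy application-level illustration of checkpoint/restart: save the
# state of a computation mid-run, then resume from exactly that point.
# Real HPC checkpointing captures full process state at the system
# level; this sketch is only an analogy.
import pickle

def partial_sum(state=None, stop_at=None, n=10):
    """Sum 1..n, optionally 'checkpointing' (returning state) at stop_at."""
    i, total = state if state else (1, 0)
    while i <= n:
        if stop_at is not None and i == stop_at:
            return pickle.dumps((i, total))   # checkpoint: save and stop
        total += i
        i += 1
    return total

ckpt = partial_sum(stop_at=6)                    # run, checkpoint at i == 6
resumed = partial_sum(state=pickle.loads(ckpt))  # restart from the checkpoint
print(resumed)  # 55, identical to an uninterrupted run
```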

This is believed to be the first full-scale use of the checkpoint/restart software with an actual production workload on an IBM SP, as well as the first checkpoint/restart on a system with more than 2,000 processors. It is the culmination of a collaborative effort between NERSC and IBM that began in 1999. Of the 44 jobs that were checkpointed, approximately 65% checkpointed successfully. Of the 15 jobs that did not checkpoint successfully, only 7 jobs were deleted from the queuing system, while the rest were requeued to run again at a later time. This test enabled NERSC and IBM staff to identify some previously undetected problems with the checkpoint/restart software, and they are now working to fix those problems.

In 1997 NERSC made history by being the first computing center to achieve successful checkpoint/restart on a massively parallel system, the Cray T3E.

Science-Driven System Architecture

The creation of NERSC’s Science-Driven System Architecture (SDSA) Team formalizes an ongoing effort to monitor and influence the direction of technology development for the benefit of computational science. NERSC staff are collaborating with scientists and computer vendors to refine computer systems under current or future development so that they will provide excellent sustained performance per dollar for the broadest possible range of large-scale scientific applications.

While the goal of SDSA may seem ambitious, the actual work that promotes that goal deals with the nitty-gritty of scientific computing—for example, why does a particular algorithm perform well on one system but poorly on another—at a level of detail that some people might find tedious or overwhelming, but which the SDSA team finds fascinating and challenging.

“All of our architectural problems would be solvable if money were no object,” said SDSA Team Leader John Shalf, “but that’s never the case, so we have to collaborate with the vendors in a continuous, iterative fashion to work towards more efficient and cost-effective solutions. We’re not improving performance for its own sake, but we are improving system effectiveness.”

Much of the SDSA work involves performance analysis: how fast do various scientific codes run on different systems, how well do they scale to hundreds or thousands of processors, what kinds of bottlenecks can slow them down, and how can performance be improved through hardware development. A solid base of performance data is particularly useful when combined with workload analysis, which considers what codes and algorithms are common to NERSC’s diverse scientific workload. These two sets of data lay a foundation for assessing how that workload would perform on alternative system architectures. Current architectures may be directly analyzed, while future architectures may be tested through simulations or predictive models.

The SDSA Team is investigating a number of different performance modeling frameworks, such as the San Diego Supercomputer Center’s Memory Access Pattern Signature (MAPS), in order to assess their accuracy in predicting performance for the NERSC workload. SDSA team members are working closely with San Diego’s Performance Modeling and Characterization Laboratory to model the performance of the NERSC-5 SSP benchmarks and compare the performance predictions to the benchmark results collected on existing and proposed HPC systems.

Another important part of the SDSA team’s work is sharing performance and workload data, along with benchmarking and performance monitoring codes, with others in the HPC community. Benchmarking suites, containing application codes or their algorithmic kernels, are widely used for system assessment and procurement. NERSC has recently shared its SSP benchmarking suite with National Science Foundation (NSF) computer centers. With the Defense Department’s HPC Modernization Program, NERSC has shared benchmarks and jointly developed a new one.

Seemingly mundane activities like these can have an important cumulative impact: as more research institutions set specific goals for application performance in their system procurement specifications, HPC vendors have to respond by offering systems that are specifically designed and tuned to meet the needs of scientists and engineers, rather than proposing strictly off-the-shelf systems. By working together and sharing performance data with NERSC and other computer centers, the vendors can improve their competitive position in future HPC procurements, refining their system designs to redress any architectural bottlenecks discovered through the iterative process of benchmarking and performance modeling. The end result is systems better suited for scientific applications and a better-defined niche market for scientific computing that is distinct from the business and commercial HPC market.

The SDSA Team also collaborates on research projects in HPC architecture. One key project, in which NERSC is collaborating with Berkeley Lab’s Computational Research Division and computer vendors, is ViVA, or Virtual Vector Architecture. The ViVA concept involves hardware and software enhancements that would coordinate a set of commodity scalar processors to function like a single, more powerful vector processor. ViVA would enable much faster performance for certain types of widely used scientific algorithms, but without the high cost of specialized processors. The research is proceeding in phases. ViVA-1 is focused on a fast synchronization register to coordinate processors on a node or multicore chip. ViVA-2 is investigating a vector register set that hides latency to memory using vector-like semantics. Benchmark scientific kernels are being run on an architectural simulator with ViVA enhancements to assess the effectiveness of those enhancements.

Perhaps the most ambitious HPC research project currently under way is the Defense Advanced Research Projects Agency’s (DARPA’s) High Productivity Computer Systems (HPCS) program. HPCS aims to develop a new generation of hardware and software technologies that will take supercomputing to the petascale level and increase overall system productivity ten-fold by the end of this decade. NERSC is one of several “mission partners” participating in the review of proposals and milestones for this project.

Proposals for New System Evaluated

As part of NERSC’s regular computational system acquisition cycle, the NERSC-5 procurement team was formed in October 2004 to develop an acquisition plan, select and test benchmarks, and prepare a request for proposals (RFP). The RFP was released in September 2005; proposals were submitted in November and are currently being evaluated. The RFP set the following general goals for the NERSC-5 system:

  • Support the entire NERSC workload, specifically addressing the DOE Greenbook recommendations.
  • Integrate with the NERSC environment, including the NERSC Global Filesystem, HPSS, Grid software, security and networking systems, and the user environment (software tools).
  • Provide the optimal balance of the following system components:
    • computational: CPU speed, memory bandwidth, and latency
    • memory: aggregate and per parallel task
    • global disk storage: capacity and bandwidth

COMPUTER 5

Knowledge

What Is a Router?

In a packet-switched network such as the Internet, a router is a device, or in some cases a piece of software running on a computer, that determines the next network point to which a data packet should be forwarded. A router is connected to at least two different networks and decides which way each packet is best sent. Routers sit at every gateway (the point where one network meets another), and routing functionality is often built into network switches.

In general, a router is used to connect two different networks so that the clients of each network can reach and communicate with one another, for example to connect a local network (192.168.x.x) to the Internet (202.x.x.x).

A router builds and maintains a table of routes and their states, and uses a routing algorithm to determine the best path along which to forward each packet. A packet may pass through several network points before reaching its destination.

For users with a high-speed Internet connection such as cable, satellite, or DSL, a router can also act as a firewall, even if the user has only a single computer. Many IT engineers consider a router to give better protection against hacking than a software firewall, because no internal IP address is directly exposed to the Internet. This makes a port scan, a technique for probing a user's system for weaknesses, nearly impossible. In addition, a router does not consume the computer's resources the way a software firewall does. Commercial routers are easy to install and are available for wired or wireless networks at affordable prices.

CHANNEL-11 strongly recommends placing a router between the user's computer and the Internet. Whether to use a PC-based router or one of the many commercial routers sold in computer shops is entirely up to the user. If required, CHANNEL-11 can include a broadband router (a commercial router) in your subscription package as a one-time equipment charge. For fully wireless networks, CHANNEL-11 can also supply a combined router and access point. A further advantage of a router is that it can act as a DHCP server, so that every computer connected to it automatically receives an IP address.

LAN (Local Area Network)

A computer network usually connects many computers to one or more servers. A server is a computer that acts as the "attendant" for sending and receiving data, and that coordinates the data traffic among the connected computers. This role is made possible by special server software. Well-known server operating systems of the past include Xenix, UNIX, Novell, and Microsoft Windows 3.11, among others. Today the most commonly used are the recent releases of Novell and Windows NT, which are compatible with the Internet because their vendors have added TCP/IP support to their products. TCP/IP is the protocol suite used on the Internet for transmitting data and controlling its delivery.

Physically, a computer network consists of computers connected by data cables. Various cable types are made for particular uses, such as RG-58 cable for indoor runs; UTP cable may also be used. For network links between buildings, RG-8 cable, known as backbone cable, can be used. If you build a network between buildings, take care to protect it from lightning.

Computers within a network can be connected in several ways; in general there are three basic topologies:

1. Bus topology, in which the computers and the server are attached in series to a single cable. Each end of the data cable is fitted with an electronic component called a terminator, essentially a metal-cased resistor with a resistance of 50 ohms.

2. Star topology, which uses an additional device called a hub as the central connection point. A hub has a certain number of connector sockets (called ports): 8, 12, 16, or 24. The data cable from each computer or server is plugged into this device.

3. Token Ring topology, in which the cable connecting the computers forms a closed loop (ring). The computers are attached in series along a data cable whose two ends are then joined together.

Each of these topologies can stand on its own, or, in a large network, they can be combined according to local conditions and the planned use.

Data communication between these computers is handled by a data communication protocol. Several protocols exist, such as IPX/SPX, NetBEUI, TCP/IP, and others.

In a network connected to the Internet, servers take on several roles. The most common are the gateway, the web server (hosting home pages or web sites), and the mail server (handling electronic mail). For special purposes, proxy servers, cache servers, firewall servers, name servers, and routers are also set up. Information on computer networks and data communication techniques can be found on the FreeBSD and Linux web sites, or from the computer networking groups at the Institut Teknologi Bandung (ITB): the Computer Network Research Group (CNRG) and the Informatics Engineering and Electrical Engineering departments at ITB.

COMPUTER 4

AN INTRODUCTION TO WIRELESS LAN

Posted by itok in Computer Networking.

A Wireless Local Area Network (WLAN) is a computer network that uses radio waves as its data transmission medium. Information (data) is transferred from one computer to another by radio. A WLAN is often simply called a wireless network.


Wireless communication began with the appearance of radio-based equipment such as walkie-talkies, remote controls, cordless phones, mobile phones, and other radio devices, combined with the need to make computers portable (mobile) and easy to attach to existing networks. These factors ultimately drove the development of wireless technology for computer networks.

In 1997, an independent body named the IEEE published the first WLAN specification, designated 802.11. Equipment conforming to 802.11 operates in the 2.4 GHz band with a theoretical maximum data rate (throughput) of 2 Mbps. Unfortunately, 802.11 equipment was poorly received in the market: a throughput this low was considered inadequate for multimedia and other demanding applications.

In July 1999, the IEEE released a new specification named 802.11b. Its theoretical maximum data rate is 11 Mbps, comparable to traditional Ethernet (IEEE 802.3, 10 Mbps or 10Base-T). 802.11b equipment also operates in the 2.4 GHz band. One drawback of this band is the possibility of interference with cordless phones, microwave ovens, and other equipment using radio waves at the same frequency.

At almost the same time, the IEEE defined 802.11a, which uses a different technique. It operates in the 5 GHz band and supports a theoretical maximum data rate of 54 Mbps. The radio waves emitted by 802.11a equipment penetrate walls and other obstacles relatively poorly, so its range is shorter than that of 802.11b. Technically, 802.11b is not compatible with 802.11a, although many hardware manufacturers now make equipment that supports both standards.

In 2003, the IEEE published a specification combining the strengths of 802.11b and 802.11a. Designated 802.11g, it operates in the 2.4 GHz band with a theoretical maximum data rate of 54 Mbps. 802.11g equipment is backward compatible with 802.11b, so the two can be mixed: a computer with an 802.11g network card can use an 802.11b access point, and vice versa.
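The standards described above can be collected into a small lookup table. The following Python sketch uses only figures quoted in the text; the `compatible` helper is a deliberate simplification that compares frequency bands only (which is why b and g interoperate while a does not):

```python
# Summary of the IEEE 802.11 standards discussed above.
# Figures are the theoretical maximum data rates quoted in the text.
WLAN_STANDARDS = {
    "802.11":  {"year": 1997, "band_ghz": 2.4, "max_mbps": 2},
    "802.11b": {"year": 1999, "band_ghz": 2.4, "max_mbps": 11},
    "802.11a": {"year": 1999, "band_ghz": 5.0, "max_mbps": 54},
    "802.11g": {"year": 2003, "band_ghz": 2.4, "max_mbps": 54},
}

def compatible(a: str, b: str) -> bool:
    """Simplification: two standards can interoperate only if they
    share a frequency band (b/g are the backward-compatible pair)."""
    return WLAN_STANDARDS[a]["band_ghz"] == WLAN_STANDARDS[b]["band_ghz"]

print(compatible("802.11b", "802.11g"))  # True
print(compatible("802.11b", "802.11a"))  # False
```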
Several terms come up frequently in connection with wireless networking. Some of them are:

1. Wi-Fi or WiFi
Wi-Fi, or Wireless Fidelity, is the other name given to products that follow the 802.11 specifications. Most computer users are more familiar with the term Wi-Fi card/adapter than with 802.11 card/adapter. Wi-Fi is a trademark, and is far better known than the name "IEEE 802.11".

2. Channel
Think of a frequency band as a road, and of channels as the lanes marked on it. 802.11a equipment operates at 5.15 - 5.875 GHz, while 802.11b and 802.11g equipment operates at 2.4 - 2.497 GHz. 802.11a therefore uses a wider frequency band than 802.11b or 802.11g, and the wider the band, the more channels are available.
Each channel can carry a full load of traffic. 802.11a offers up to 8 non-overlapping channels, each of which can be "loaded" with a throughput of 54 Mbps, for an aggregate throughput of 432 Mbps. 802.11b/g offers 3 non-overlapping channels, each of which can be "loaded" with up to 11 Mbps, for an aggregate throughput of 33 Mbps.
To communicate with one another, all wireless devices must use the same channel. The channel number can be set during driver installation or through the utility supplied by each vendor.
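The lane analogy can be made concrete. In the 2.4 GHz band the channel centers are spaced 5 MHz apart, starting at 2412 MHz for channel 1; since each channel is roughly 22 MHz wide, only channels 1, 6, and 11 are mutually non-overlapping. A small Python sketch of this band plan:

```python
# Center frequency of a 2.4 GHz Wi-Fi channel (channels 1-13):
# channel 1 is centered at 2412 MHz and each step is 5 MHz.
def channel_center_mhz(channel: int) -> int:
    if not 1 <= channel <= 13:
        raise ValueError("2.4 GHz band channels are 1-13")
    return 2407 + 5 * channel

# The three non-overlapping 802.11b/g channels:
for ch in (1, 6, 11):
    print(ch, channel_center_mhz(ch), "MHz")

# Aggregate throughput of the 3 non-overlapping 802.11b channels,
# as computed in the text:
print(3 * 11, "Mbps")
```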

3. MIMO
MIMO (Multiple Input, Multiple Output) is the newest Wi-Fi technology, built on the Pre-802.11n specification, where "Pre-" stands for "pre-standard versions of 802.11n". MIMO offers higher throughput, better reliability, and support for a larger number of connected clients. MIMO signals penetrate obstacles better and reach further, so you can place a laptop or other Wi-Fi client wherever you like; a MIMO access point can reach Wi-Fi devices in every corner of a room.
Technically, MIMO outperforms its older siblings 802.11a/b/g. A MIMO access point can recognise the radio signals transmitted by 802.11a/b/g Wi-Fi adapters, so MIMO is backward compatible with 802.11a/b/g. MIMO Wi-Fi equipment can achieve data rates of 108 Mbps.

4. WEP
WEP (Wired Equivalent Privacy) is a security feature built into Wi-Fi equipment. Security is a serious concern for Wi-Fi users, because the radio waves transmitted by a Wi-Fi adapter can be received by every Wi-Fi device in the vicinity (or in the building next door). Such conditions are clearly risky, since information can easily be "captured". Wi-Fi therefore supports several encryption key lengths: 40-bit, 64-bit, 128-bit, and 256-bit. Using WEP improves the security of the transferred data, at the cost of reduced throughput.

5. SSID
The SSID (Service Set IDentifier) is the identifier, or name, of a wireless network. Every Wi-Fi device must be configured with a specific SSID, and devices are considered part of the same network only if they use the same SSID. To communicate, wireless devices must share both the same SSID and the same channel. The SSID is case-sensitive: upper- and lower-case letters matter.

6. SES
SES stands for SecureEasySetup, an answer to the difficulty many people have had in setting up network security. At the press of a single button, SES automatically assigns an SSID and a security key to the router and the adapter and enables WPA (Wi-Fi Protected Access) security. To use SES, the user simply presses the SES button on the router and then on the client, after which the two devices establish a secure communication channel.

COMPUTER 3

COMPUTER NETWORKS

A computer network consists of a number of hosts and the connectivity between them.

  • A host can be a computer: a PC, a minicomputer, or another type of computer.
  • Connectivity in a computer network is classified by its connecting medium:
    • wired (cable)
      • Ethernet
      • modem
    • wireless
      • radio modem
      • infrared


TCP/IP Network Addressing

  • An address has a form; there is a pattern.
    1. A telephone number: [country code]-[area code]-[subscriber number]
    2. A home address: [street name] [number] [city]
  • There is a relationship between a name and an address (who, where), and it is stored in a system, an address book.
  • The address format is 32 bits (4 octets; an octet is an 8-digit binary number).
  • Example: 1F.A3.4B.27 (hexadecimal) = 31.163.75.39 (decimal)
  • Minimum: 0.0.0.0
  • Maximum: 255.255.255.255
  • Ownership of IP addresses is recorded by the Network Information Center (NIC).
  • A TCP/IP address consists of a NETWORK part and a HOST part.
  • The network part comprises the network address (usually simply called the NETWORK) and the netmask.
  • The netmask is a filter (mask) that marks the NETWORK part of an address.
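The masking operation can be sketched in a few lines of Python; `network_part` is a hypothetical helper that ANDs an address with a netmask, octet by octet:

```python
# The netmask "masks out" the NETWORK part of an address:
# network = address AND netmask, applied octet by octet.
def network_part(address: str, netmask: str) -> str:
    octets = zip(address.split("."), netmask.split("."))
    return ".".join(str(int(a) & int(m)) for a, m in octets)

print(network_part("10.2.3.99", "255.255.255.0"))        # 10.2.3.0
print(network_part("167.205.23.27", "255.255.255.240"))  # 167.205.23.16
```

A "255" octet in the mask keeps the whole address octet, which matches the rule of thumb used in the examples below.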

Example 1:

A number of computers are connected in one network with the following parameters:

  • network: 10.2.3.0
  • netmask: 255.255.255.0
  • broadcast: 10.2.3.255
  • hosts: 10.2.3.1 through 10.2.3.254

The network can therefore accommodate at most 254 hosts.

This course does not cover binary arithmetic and masking in detail. Participants need only understand that "255" means the whole octet is used as part of the network address. The second example shows the more precise picture that emerges once participants understand binary arithmetic.

Example 2:

A number of computers are connected in one network with the following parameters:

  • network: 167.205.23.16
  • netmask: 255.255.255.240
  • broadcast: 167.205.23.31
  • hosts: 167.205.23.17 through 167.205.23.30

The network can therefore accommodate at most 14 hosts.
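These figures are easy to verify with Python's standard ipaddress module; a minimal check of Example 2:

```python
# Verifying the figures of Example 2 with the standard library.
import ipaddress

# A /28 network: netmask 255.255.255.240.
net = ipaddress.ip_network("167.205.23.16/255.255.255.240")

print(net.network_address)    # 167.205.23.16
print(net.broadcast_address)  # 167.205.23.31
hosts = list(net.hosts())
print(hosts[0], hosts[-1])    # 167.205.23.17 167.205.23.30
print(len(hosts))             # 14
```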

Addresses are divided into classes according to the first octet:

  1. Class A: 0 through 127
  2. Class B: 128 through 191
  3. Class C: 192 through 223
  4. Class D: 224 through 239 (multicast)
  5. Class E: 240 through 255 (experimental/extended)

Special addresses

  • The network identifier
  • The broadcast address
  • Loopback (127.0.0.1): the address that refers to the host itself ("me").
  • Private networks: address ranges that may be used freely, but that do not exist on the public Internet.

Example:

Class      A              B                C              D          E
Network    5.0.0.0        167.205.0.0      192.168.2.0    224.0.0.9  241.23.5.2
Netmask    255.0.0.0      255.255.0.0      255.255.255.0  -          -
Broadcast  5.255.255.255  167.205.255.255  192.168.2.255  224.0.0.9  -

This class-based allocation is inefficient: many addresses go unused, and networks of the larger classes are hard to manage. In practice a network of any class is further divided into smaller sub-networks.
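Such subdivision works the same way in software; a sketch using the standard ipaddress module to split a class-C-sized network into four smaller subnets:

```python
# Subdividing a class-C-sized network into smaller subnets
# using Python's standard ipaddress module.
import ipaddress

net = ipaddress.ip_network("192.168.2.0/24")

# Split into four /26 subnets of 62 usable hosts each
# (64 addresses minus the network and broadcast addresses):
subnets = list(net.subnets(new_prefix=26))
for s in subnets:
    print(s, "-", s.num_addresses - 2, "hosts")
```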

UNIX utility: ifconfig

myHost# ifconfig ec0
ec0: flags=807
inet 192.168.2.5 netmask ffffff00 broadcast 192.168.2.255

myHost# ifconfig lo0
lo0: flags=49
inet 127.0.0.1 netmask ff000000


myHost# ifconfig -a
--- display all interfaces and their settings

ec0 and lo0 are the names of the devices that connect the computer to the network.

The ifconfig program is used to:

  • enable and disable an interface
  • enable and disable ARP on an interface (see physical addresses)
  • enable and disable debug mode on an interface
  • set the host address, subnet mask, and routing method.

MS-Windows utility: WINIPCFG or IPCONFIG

MS-Windows: Start - Run -
Type: winipcfg
The current IP configuration is then displayed.

The IP address and netmask can be changed from Control Panel - Network - Protocol - TCP/IP:

  • IP address
  • netmask
  • Gateway: the host that connects this network to other networks.
  • The meaning of DNS is explained in the next section.
  • The host name can be chosen freely and may even break the DNS naming rules.
  • The DNS server fields take IP addresses.
  • A domain suffix makes it easier to complete short names.

Checking Connectivity

To check whether a computer is properly connected to the network, the PING utility is used; it bounces an echo message off the target host.

UNIX/MS-Windows utility: ping

MS-Windows: Start - Program - MS-DOS Prompt
Type: ping
C:\WINDOWS>ping 127.0.0.1

Pinging 127.0.0.1 with 32 bytes of data:

Reply from 127.0.0.1: bytes=32 time<10ms TTL=128
Reply from 127.0.0.1: bytes=32 time<10ms TTL=128
Reply from 127.0.0.1: bytes=32 time<10ms TTL=128
Reply from 127.0.0.1: bytes=32 time<10ms TTL=128

Ping statistics for 127.0.0.1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms

Analysis:

  • There is a "reply" from the target address: our host is connected to the computer with the IP address we specified.
  • The number of packets sent equals the number received: the link is good, and no data was lost.
  • Note the minimum, maximum, and average round-trip times, and compare them with the times obtained when pinging other hosts.
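The loss figure in the summary line can also be computed programmatically; `packet_loss` is a hypothetical helper that parses the Windows-style summary shown above:

```python
# Computing packet loss from the summary line of the ping output
# shown above (Windows-style format assumed).
import re

def packet_loss(summary: str) -> float:
    """Return the loss percentage from a 'Packets: Sent = ...' line."""
    m = re.search(r"Sent = (\d+), Received = (\d+)", summary)
    sent, received = int(m.group(1)), int(m.group(2))
    return 100.0 * (sent - received) / sent

line = "Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),"
print(packet_loss(line))  # 0.0
```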

Routing

If the destination host is not on the same network (its NETWORK address differs), the data packet is sent to a GATEWAY to be forwarded (routed). A simple network usually has only one gateway to all other networks. Where there is more than one gateway, a table is needed that records which gateway is used to reach which network.
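Such a table is consulted by longest-prefix match: the destination is compared with every entry, the most specific match wins, and 0.0.0.0/0 acts as the default route. A sketch with a hypothetical routing table, using Python's standard ipaddress module:

```python
# How a host chooses where to send a packet: compare the destination
# with each routing-table entry; the most specific (longest prefix)
# match wins. The table below is hypothetical, for illustration only.
import ipaddress

ROUTES = [
    (ipaddress.ip_network("192.168.2.0/24"), "direct"),        # local LAN
    (ipaddress.ip_network("10.0.0.0/8"), "10.0.0.1"),          # internal nets
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.2.254"),      # default gateway
]

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    matches = [(net, gw) for net, gw in ROUTES if dest in net]
    # The most specific route (largest prefix length) wins.
    return max(matches, key=lambda r: r[0].prefixlen)[1]

print(next_hop("192.168.2.5"))    # direct
print(next_hop("10.20.30.40"))    # 10.0.0.1
print(next_hop("167.205.23.27"))  # 192.168.2.254
```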

UNIX/MS-Windows utility: netstat

Netstat (short for network status) is used to report the status of the network.

MS-Windows: Start - Program - MS-DOS Prompt
Type: netstat -rn
C:\WINDOWS>netstat -rn

Route Table

Active Routes:

Network Address    Netmask            Gateway Address    Interface   Metric
127.0.0.0          255.0.0.0          127.0.0.1          127.0.0.1   1
255.255.255.255    255.255.255.255    255.255.255.255    0.0.0.0     1

Active Connections

Proto Local Address Foreign Address State

If a host on another network cannot be reached, there may be an error in the routing table, or a link may be broken somewhere along the way. The TRACEROUTE utility can then be used to track down where the failure occurs.

UNIX utility: traceroute

MS-Windows utility: tracert

MS-Windows: Start - Program - MS-DOS Prompt
Type: tracert
C:\WINDOWS>tracert 167.205.206.59

Tracing route to 167.205.206.59 over a maximum of 30 hops

1 <10 ms 1 ms <10 ms 10.1.1.12
2 1 ms 2 ms 2 ms 10.210.1.1
3 2 ms 2 ms 2 ms 167.205.206.59

Trace complete.

  • A network or a host may (for various reasons) refuse to be contacted directly, or may even hide its existence while still functioning as a router. This appears as a gap (marked '*') in the trace.
  • Trace the nodes on the path to a computer in your own faculty/unit, and compare the result with the university network diagram.
  • A trace should be run in both directions (from each of the two communicating hosts) to discover whether or not the same path is used both ways.

DEFINITION OF THE INTERNET

The term internet has become so popular that its meaning has grown blurred. This course uses the following definitions:

  1. internet: the world-wide collection of networks interconnected by the Internet Protocol (IP).
  2. internet (inter-network): a number of physical networks interconnected with a common protocol (whatever it may be) to form one logical network, hereafter called an inter-network.
  3. Internet: the community of computer networks that provides HTTP service (the World Wide Web). It is distinguished from an intranet, which provides HTTP service to a restricted group. Originally the restriction was physical (a LAN); it later grew to include logical restrictions.
  4. Intranet: a TCP/IP network for a restricted group. The general public takes this to mean a local network (LAN) with private IP addressing.
  5. Extranet: a TCP/IP network for a restricted group carried over the public internet, using tunneling and secure layers.

COMPUTER 1

COMPUTER

The facilities available to date in support of R&D activities and information technology services include the following:

COMPUTER NETWORK FACILITIES (Intranet/Internet)

  1. An intranet/internet system that is already operating well;
  2. Connection to the Internet backbone via radio and VSAT;
  3. Bandwidth of 256 – 512 Kbps;
  4. WAN connectivity distributed via a radio base station with a 360° sectoral antenna;
  5. Web, mail, FTP, database, and security servers;
  6. Local network infrastructure using fibre-optic and UTP cabling, with Layer 3 switching hubs;
  7. Software: Microsoft Windows 2003 Enterprise Advanced Server, Red Hat Linux Enterprise, Microsoft Exchange Server 5.5, Microsoft SQL Server, Microsoft Proxy/Host Integration Server, Microsoft Systems Management Server, and other supporting software.


INFORMATION SYSTEMS DEVELOPMENT (MIS/GIS/REMOTE SENSING)
  1. Desktop PCs and graphics workstations;
  2. A0 and A4 scanners;
  3. A0 and A1 digitizers;
  4. High-resolution printers and plotters (1200/2400 dpi);
  5. GPS units (navigation, mapping, and geodetic types) and GPS radio communication;
  6. Software:
    ErMapper v6.1, MapInfo Professional v7.5, ArcInfo ArcGIS, ArcView, Geomatic v9, Macromedia Studio, MapGuide, GeoMedia, MS-Access, MS-SQL Server, Delphi, Visual Basic, Datamine, Micromine, Asset Surveyor v5, Pathfinder Office, MapSource, Trimble Geomatic Office, R2V, and other supporting software.

Thursday, November 29, 2007

for my friend Dary

Come, friends, let us build the bonds of brotherhood and together raise up the light of the Qur'an. Do not give in to complaint; welcome the martyr's goal.