Telnet is a terminal emulation program for TCP/IP networks such as the Internet. The Telnet program runs on your computer and connects your PC to a server on the network. You can then enter commands through the Telnet program and they will be executed as if you were entering them directly on the server console. This enables you to control the server and communicate with other servers on the network. To start a Telnet session, you must log in to a server by entering a valid username and password.
A WAN is different from a LAN. Unlike a LAN, which connects workstations, peripherals, terminals, and other devices in a single building or other small geographic area, a WAN makes data connections across a broad geographic area. Companies use the WAN to connect various company sites together so information can be exchanged between distant offices.
WAN Connection Types
A leased line, also known as a point-to-point or dedicated connection, provides a single, preestablished WAN communications path from the customer premises, through a service provider network, to a remote network. This connection is reserved by the service provider for the client's private use.
Circuit switching is a switching system in which a dedicated circuit path must exist between sender and receiver for the duration of the "call." Circuit switching is used by the service provider network when providing basic telephone service or Integrated Services Digital Network (ISDN). Circuit switched connections are commonly used in environments that require only sporadic WAN usage. Basic telephone service is typically employed over an asynchronous serial connection.
Packet switching is a WAN switching method in which network devices share a single point-to-point link to transport packets from a source to a destination across a carrier network. Packet switched networks use virtual circuits (VCs) that provide end-to-end connectivity. Physical connections are provided by programmed switching devices.
- Router Access Lists manage IP traffic as network access grows and filter packets as they pass through the router.
- Access list applications include permitting or denying packets moving through a router, vty access to or from a router, custom queuing, and triggering of "dial-on-demand" routing.
- There are two general types of access lists: standard, which permits or denies output for an entire protocol suite based on the source address, and extended, which allows greater flexibility by being able to check for source and destination addresses as well as specific protocols and port numbers.
- Access lists may be applied as either inbound or outbound access lists. With inbound access lists, incoming packets are processed before being routed to an outbound interface. With outbound access lists, incoming packets are routed to the outbound interface and then processed through the outbound access list.
- In terms of access lists, permit means to allow the packet through, deny means to discard the packet, and the implicit deny at the end of every list ensures that any packets not matching an access list statement are dropped.
- General guidelines for access list configuration include: the most restrictive statements should be at the top of the list; there should be one access list per interface, per protocol, per direction; access lists should be created before being applied to interfaces; and every access list should have at least one permit statement.
- For IP, standard access lists use the number range 1 – 99 as an identifier and extended access lists use 100 – 199. For IPX, standard access lists use the number range 800 – 899 and extended access lists use 900 – 999.
- The parameters that the Cisco IOS IP access list checks include: port number, protocol, source address, and destination address.
- Address filtering occurs using access list address wildcard masking to identify how to check or ignore corresponding IP address bits.
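The wildcard-mask check can be sketched in shell arithmetic. This is an illustration only, not Cisco syntax; the helper names `ip_to_int` and `matches` are invented for this example. A 1 bit in the wildcard means "ignore this bit of the address"; a 0 bit means "this bit must match".

```shell
#!/bin/sh
# Convert a dotted-quad IP address to a 32-bit integer.
ip_to_int() {
  oldifs=$IFS; IFS=.
  set -- $1
  IFS=$oldifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# matches <packet-ip> <list-ip> <wildcard>  -> prints "permit" or "deny"
matches() {
  p=$(ip_to_int "$1"); l=$(ip_to_int "$2"); w=$(ip_to_int "$3")
  # Mask out the "ignore" bits on both sides before comparing.
  if [ $(( p & ~w & 4294967295 )) -eq $(( l & ~w & 4294967295 )) ]; then
    echo permit
  else
    echo deny
  fi
}

matches 172.16.3.9  172.16.0.0 0.0.255.255   # only first two octets checked: permit
matches 192.168.1.1 172.16.0.0 0.0.255.255   # deny
```

With wildcard 0.0.255.255, only the first two octets are compared, so any 172.16.x.x source matches the test.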
Access List Configuration
- General guidelines for configuring access lists include remembering that every access list ends with an implicit deny, and ordering access lists so that the more specific tests, and the tests that will match most frequently, come at the beginning of the access list.
- Standard access lists filter based on source address and mask while extended access lists filter based on source and destination address allowing more filtering control. In addition, extended access lists allow for filtering by protocol and port.
- To configure standard access lists, use the access-list and access-group commands. These commands identify the list number, identify the source IP address, and link the access list to an interface.
- The two steps for setting access lists are setting the parameters for the access test statement and enabling the interface to use the specified list.
- The IOS commands to enable an extended access list are the same as for enabling a standard access list, but they include additional parameters for configuration, such as identification of specific protocols and ports. These commands are access-list and access-group.
- The two steps for setting extended access lists are setting the parameters for the access test statement and enabling the interface to use the specified list. The test statement may include source and destination addresses as well as protocols and port numbers.
- Named access lists allow for IP standard and extended access lists to be identified with an alphanumeric string, not a number. Named access lists allow you to delete, but not insert, individual entries from a specific access list.
- Place extended access lists close to the source of the traffic to be denied while standard access lists should be placed as near the destination as possible.
- Access lists can be used to control virtual terminal (vty) access to or from a router. Users can be denied access to a router or denied access to destinations from that router.
- The two commands used to configure a router for vty access are line vty, which places the router in line configuration mode, and access-class, which links an existing access list to a terminal line or range of lines.
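A hedged sketch of what the access-list, access-group, and access-class commands described above look like in Cisco IOS configuration. The list numbers, addresses, and interface names are invented for illustration:

```
! Standard access list (1-99): filter on source address only
access-list 10 permit 172.16.0.0 0.0.255.255
interface Ethernet0
 ip access-group 10 out
!
! Extended access list (100-199): source, destination, protocol, and port
access-list 101 permit tcp 172.16.0.0 0.0.255.255 any eq telnet
!
! Restrict vty (Telnet) access to the router itself
access-list 12 permit 192.168.1.0 0.0.0.255
line vty 0 4
 access-class 12 in
```

Remember that each list ends with an implicit deny, so anything not explicitly permitted is dropped.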
Newspapers and Internet magazines ran cover stories when denial-of-service (DoS) attacks assaulted the websites of a number of large and very successful companies last year. Even those who claim to provide security tools were under attack. If Yahoo, Amazon, CNN, and Microsoft fell victim to DoS attacks, can any site owner feel safe?
In this article we'll try to help site owners understand the ins and outs of DoS and DDoS attack methods, vulnerabilities, and potential solutions to these problems. Webmasters are usually left searching for solutions to new security threats and ways of patching up before it is too late.
In a Denial of Service (DoS) attack, the attacker sends a stream of requests to a service on the server machine in the hope of exhausting all resources, such as memory, or consuming all processor capacity.
DoS Attacks Involve:
- Jamming Networks
- Flooding Service Ports
- Misconfiguring Routers
- Flooding Mail Servers
In a Distributed DoS (DDoS) attack, a hacker installs an agent or daemon on numerous hosts. The hacker sends a command to the master, which resides in any of the many hosts. The master communicates with the agents residing in other servers to commence the attack. DDoS attacks are harder to combat because blocking a single IP address or network will not stop them. The traffic can derive from hundreds or even thousands of individual systems, and sometimes the users are not even aware that their computers are part of the attack.
DDoS Attacks Involve:
- FTP Bounce Attacks
- Port Scanning Attack
- Ping Flooding Attack
- Smurf Attack
- SYN Flooding Attack
- IP Fragmentation/Overlapping Fragment Attack
- IP Sequence Prediction Attack
- DNS Cache Poisoning
- SNMP Attack
- Send Mail Attack
Some of the more popular attack methods are described below.
FTP Bounce Attack
FTP (File Transfer Protocol) is used to transfer documents and data anonymously from a local machine to a server and vice versa. All administrators of FTP servers should understand how this attack works. The FTP bounce attack is used to slip past application-based firewalls.
In a bounce attack, the hacker uploads a file to the FTP server and then requests this file be sent to an internal server. The file can contain malicious software or a simple script that occupies the internal server and uses up all the memory and CPU resources.
To avoid these attacks, the FTP daemon on the Web servers should be updated regularly. The site FTP should be monitored regularly to check whether any unknown files have been transferred to the Web server. Firewalls also help by filtering content and commands. Some firewalls block certain file extensions, a technique that can help block the upload of malicious software.
Port Scanning Attack
A port scan is when someone uses software to systematically scan the entry points on another person's machine. There are legitimate uses for this software in managing a network.
Most hackers enter another's computer to leave unidentifiable harassing messages, capture passwords, or change the set-up configuration. The defense for this is consistent network monitoring. There are free tools that monitor for port scans and related activity.
Ping Flooding Attack
Pinging involves one computer sending a signal to another computer expecting a response back. Responsible use of pinging provides information on the availability of a particular service. Ping Flooding is the extreme of sending thousands or millions of pings per second. Ping Flooding can cripple a system or even shut down an entire site.
A Ping Flooding Attack floods the victim’s network or machine with IP Ping packets. At least 18 operating systems are vulnerable to this attack, but the majority can be patched. There are also numerous routers and printers that are vulnerable. Patches cannot currently be applied throughout a global network easily.
Smurf Attack
A Smurf Attack is a modification of the ping attack: instead of sending pings directly to the attacked system, they are sent to a broadcast address with the victim's return address. A range of IP addresses from the intermediate system will then send pings to the victim, bombarding the victim machine or system with hundreds or thousands of pings.
One solution is to prevent the Web server from being used as a broadcast amplifier. Routers must be configured to deny IP-directed broadcasts from other networks into the network. Another helpful measure is to configure the router to block IP spoofing from the network to be protected. Routers configured this way will block any packets that do not originate in the network. To be effective, this must be done on all routers in the network.
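On Cisco routers, for example, the two measures look roughly like the fragment below. The interface names and addresses are invented for illustration (and on modern IOS, directed broadcasts are already disabled by default):

```
! Refuse to forward directed broadcasts onto the attached LAN
interface FastEthernet0/0
 no ip directed-broadcast
!
! Drop inbound packets that spoof an internal source address
access-list 105 deny ip 172.16.0.0 0.0.255.255 any
access-list 105 permit ip any any
interface Serial0
 ip access-group 105 in
```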
SYN Flooding Attack
This attack exploits a vulnerability in the TCP/IP communications protocol. It keeps the victim machine responding to a non-existent system. The victim is sent packets and asked to respond to a system or machine with an incorrect IP address. As it responds, it is flooded with requests. The requests wait for a response until the packets begin to time out and are dropped. During the waiting period, the victim system is consumed by the requests and cannot respond to legitimate ones.
When a normal TCP connection starts, a destination host receives a SYN (synchronize/start) packet from a source host and sends back a SYN ACK (synchronize acknowledge) response. The destination host must hear an acknowledgement, or ACK packet, of the SYN ACK before the connection is established. This is referred to as the "TCP three-way handshake".
Decreasing the time-out waiting period for the three-way handshake can help to reduce the risk of SYN flooding attacks, as will increasing the size of the connection queue (the SYN ACK queue). Applying service packs to upgrade older operating systems is also a good countermeasure. More recent operating systems are resistant to these attacks.
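On Linux, for example, these countermeasures can be tuned along exactly these lines. This is a sketch: the keys are Linux-specific sysctl settings, and the values shown are illustrative, not recommendations:

```
# /etc/sysctl.conf
net.ipv4.tcp_syncookies = 1          # answer SYNs without committing queue space
net.ipv4.tcp_max_syn_backlog = 2048  # enlarge the half-open (SYN ACK) queue
net.ipv4.tcp_synack_retries = 3      # give up on unanswered SYN ACKs sooner
```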
IP Fragmentation/Overlapping Fragment Attack
To facilitate IP transmission over comparatively congested networks, IP packets can be reduced in size or broken into smaller packets. By making the packets very small, an attacker prevents routers and intrusion detection systems from identifying the packets' contents, so they pass through without examination. When a packet is reassembled at the other end, it overflows the buffer. The machine may hang, reboot, or exhibit no effect at all.
In an Overlapping Fragment Attack, the reassembled packet starts in the middle of another packet. As the operating system receives these invalid packets, it allocates memory to hold them. This eventually uses all the memory resources and causes the machine to reboot or hang.
IP Sequence Prediction Attack
Using the SYN Flood method, a hacker can establish a connection with a victim machine and obtain the IP packet sequence number in an IP Sequence Prediction Attack. With this number, the hacker can control the victim machine and fool it into believing it's communicating with another network machine. The victim machine will provide requested services. Most operating systems now randomize their sequence numbers to reduce the possibility of prediction.
DNS Cache Poisoning
DNS provides distributed host information used for mapping domain names and IP addresses. To improve productivity, the DNS server caches the most recent data for quick retrieval. This cache can be attacked and the information spoofed to redirect a network connection or block access to Web sites, a devious tactic called DNS cache poisoning.
The best defense against problems such as DNS cache poisoning is to run the latest version of the DNS software for the operating system in use. New versions track pending requests and serialize them to help prevent spoofing.
SNMP Attack
Most network devices support SNMP because it is active by default. An SNMP Attack can result in the network being mapped, and traffic can be monitored and redirected.
The best defense against this attack is upgrading to SNMPv3, which encrypts passwords and messages. Since SNMP resides on almost all network devices, including routers, hubs, switches, servers, and printers, the task of upgrading is huge. Some vendors now offer an SNMP management tool that includes upgrade distribution for global networks.
UDP Flood Attack
A UDP Flood Attack links two unsuspecting systems. By spoofing, the UDP flood hooks up one system's UDP service (which, for testing purposes, generates a series of characters for each packet it receives) with another system's UDP echo service (which echoes any character it receives, in an attempt to test network programs). As a result, a non-stop flood of useless data passes between the two systems.
Send Mail Attack
In this attack, hundreds of thousands of messages are sent in a short period of time; a normal load might be only 100 or 1,000 messages per hour. Attacks against Send Mail might not make the front page, but downtime on major websites will.
For companies whose reputation depends on the reliability and accuracy of their Web-Based transactions, a DoS attack can be a major embarrassment and a serious threat to business.
Frequent denial-of-service attacks and a change in strategy by "Black-Hat Hackers" are prompting enterprises to demand technology that proactively blocks malicious traffic.
Tools and services that combat such DoS attacks have been introduced over time, normally as upgrades to what was produced before. No solution can claim to be the ultimate defense against DoS attacks, and despite new technology arriving every day, the attacks are likely to continue.
A tree topology combines characteristics of linear bus and star topologies. It consists of groups of star-configured workstations connected to a linear bus backbone cable. Tree topologies allow for the expansion of an existing network and enable schools to configure a network to meet their needs.
Advantages of a Tree Topology
- Point-to-point wiring for individual segments.
- Supported by several hardware and software vendors.
Disadvantages of a Tree Topology
- Overall length of each segment is limited by the type of cabling used.
- If the backbone line breaks, the entire segment goes down.
- More difficult to configure and wire than other topologies.
A consideration in setting up a tree topology using Ethernet protocol is the 5-4-3 rule. One aspect of the Ethernet protocol requires that a signal sent out on the network cable reach every part of the network within a specified length of time. Each concentrator or repeater that a signal goes through adds a small amount of time. This leads to the rule that between any two nodes on the network there can be a maximum of 5 segments, connected through 4 repeaters/concentrators. In addition, only 3 of the segments may be populated (trunk) segments if they are made of coaxial cable. A populated segment is one which has one or more nodes attached to it. In Figure 4, the 5-4-3 rule is adhered to. The furthest two nodes on the network have 4 segments and 3 repeaters/concentrators between them.
This rule does not apply to other network protocols or to Ethernet networks where all fiber optic cabling or a combination of a fiber backbone with UTP cabling is used. If there is a combination of fiber optic backbone and UTP cabling, the rule is simply translated to a 7-6-5 rule.
The best html web page editors will make it easy to create nice looking websites. But the content you select and how you organize it is the difference between visitors clicking away immediately or not. I’ve identified ten common mistakes made by web design newbies. Use this as a checklist for your own web site designs:
1. Too Many Advertisements
Making money from your website is fine, as long as you don’t get too greedy by jamming too many ads everywhere. (Of course, by doing this, you will make less money anyway.) What’s “too many?” Go to your website as a visitor, and have someone else look it over, too. Trust your first impression and compare it to what your friend or family member thought about it. If you do have ads, make sure they’re not too intrusive or obnoxious–like flashing banners, for example.
2. Using Flash On Your Site
Bad idea! Don’t use a Flash intro on your website. Only rarely is it ever done tastefully and appropriately. Even when it is done well, it’s a waste of effort. It’s just too risky and the rewards are NONE. No matter how cool a flash intro may look, it has never, ever proved to increase traffic or website “stickiness.” It almost always has the opposite effect.
3. Plugin Overload
Keep media that uses plugins to a strict maximum of one per page. For example, if you’ve got Flash, then you shouldn’t have a media player, or if you’ve got a little program run by Java then you shouldn’t have Flash as well. Don’t be tempted to plug in stuff on your website just because you think it looks “cool.” Just because you can get free things to put on your site is not a good reason to use them. (What’s better: To eat free french fries or throw them out and save your health?)
4. Unclear Navigation
Many websites seem to make the simplest task take several steps to achieve. Remember that web surfers are in an extremely short attention-span mode and won’t stick around to try to figure out your website.
5. Unclear Website Theme
It’s easy to get tunnel vision about getting all the little tasks done to your new website and forget the big picture. What I’m talking about here is the need for you to have a main theme/main purpose for your site…One that is completely obvious in the first 3 seconds of someone landing on your homepage.
Think of T.V. ads that you finally figured out only after seeing them several times, because it wasn't clear what the ad was about. It's not that you weren't smart enough to figure out those ads; you simply didn't put the mental effort, slight though it may be, into figuring out the ad the first time. Watching T.V. commercials is often no different from surfing the internet as far as the mental effort you're willing to devote to it. But with the internet, your visitors are only going to see your site ONCE, for about 3 seconds, before making a decision to stay or leave. If they leave, they're never coming back, unlike T.V. commercials. So you have to make your message and your site very clear. Slight confusion = Back button or the red "X."
6. Non-working Links
You have to check all your links regularly to make sure that they all still work. Think of the times you find a site that has links that don't work: You instantly make some assumptions about the site, like it's outdated and unprofessional. It's the same feeling as seeing simple words misspelled.
7. Non-Standard Fonts
Never get too cute and creative with your website fonts. Stick to the most common web fonts like Verdana, Arial, Times New Roman, and Tahoma. Besides, many of your potential visitors won’t have other fonts installed in their browsers and computers, so all they’ll see is gobbledygook. If you do want to use non-standard fonts, then limit it to graphic header images or logos–where they’re displayed as part of the image.
8. Oversized Or Undersized Fonts
It’s important to keep your text around the standard size–like 10, 11, or 12. Making text too big or too small makes it hard to read and is a great way to get your visitors to leave immediately.
9. Bright Colors
You've seen those webpages that have a dark background with bright-colored or white text. Or sites with very bright or very dark margins, which contrast with the actual webpage content. All of these put a strain on the eyes, which makes the page hard to read and = Back Button. The tried and true method is to use black text on a white background.
10. Text Stretching Entire Width of Browser
Look at this webpage: http://www.craftown.com/candles.htm and you’ll see an example of how not to use auto-sizing webpages. The idea behind auto-sizing webpages is to use your visitor’s whole desktop monitor space instead of limiting the page to say, 750 pixels wide. This way, on small monitors the webpage looks large enough to read. But the problem comes with larger monitors where the text gets stretched way further than is normal for reading. And nowadays, most people have larger monitors. The fix? If you do use auto-sizing webpages, be sure to limit the text to a specific pixel amount. Other than that, don’t bother with it.
The Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols was developed as part of the research done by the Defense Advanced Research Projects Agency (DARPA). Later, TCP/IP was included with the Berkeley Software Distribution of UNIX.
The Internet protocols can be used to communicate across any set of interconnected networks. They are equally well suited for both LAN and WAN communication.
The TCP/IP Protocol Stack
The TCP/IP protocol stack maps closely to the OSI reference model in the lower layers. All standard physical and data-link protocols are supported.
Application layer Application protocols exist for file transfer, e-mail, and remote login. Network management is also supported at the application layer.
Transport Layer Transport services allow users to segment and reassemble several upper-layer applications onto the same transport-layer data stream. The transport layer performs two functions:
- Flow control provided by sliding windows
- Reliability provided by sequence numbers and acknowledgments
Two protocols are provided at the transport layer: TCP and UDP. TCP is a connection-oriented, reliable protocol located in the transport layer of the TCP/IP Protocol Stack. UDP is a TCP/IP Transport Layer protocol designed for applications that provide their own error recovery process. It trades reliability for speed.
Internet Layer Several protocols operate at the TCP/IP Internet layer, which corresponds to the OSI network layer:
- IP provides connectionless, best-effort delivery routing of datagrams. It is not concerned with the content of the datagrams. Instead, it looks for a way to move the datagrams to their destination.
- ICMP provides control and messaging capabilities.
- ARP determines the data link layer address for known IP addresses.
- RARP determines network addresses when data link layer addresses are known.
TCP Connection Establishment
TCP is connection oriented, so it requires connection establishment before data transfer begins.
For a connection to be established or initialized, the two hosts must synchronize on each other's Initial Sequence Numbers (ISNs). Synchronization is done in an exchange of connection-establishing segments carrying a control bit called SYN (for synchronize) and the initial sequence numbers. As a shorthand, segments carrying the SYN bit are also called "SYNs." Hence, the solution requires a suitable mechanism for picking an initial sequence number and a slightly involved handshake to exchange the ISNs.
The synchronization requires each side to send its own initial sequence number and to receive a confirmation of it in an acknowledgement (ACK) from the other side. Each side must also receive the other side's initial sequence number and send a confirming ACK. This exchange is called the three-way handshake.
TCP Simple Acknowledgment
The window size determines how much data the receiving station can accept at one time. With a window size of one, each segment must be acknowledged before another segment is transmitted, which results in inefficient use of bandwidth by the hosts.
To govern the flow of data between devices, TCP uses a flow control mechanism. The receiving TCP reports a "window" to the sending TCP. This window specifies the number of octets, starting with the acknowledgement number, that the receiving TCP is currently prepared to receive.
TCP window sizes are variable during the lifetime of a connection. Each acknowledgement contains a window advertisement that indicates how many bytes the receiver can accept. TCP also maintains a congestion control window, which is normally the same size as the receiver's window, but is cut in half when a segment is lost (for example, when there is congestion). This approach permits the window to be expanded or contracted as necessary to manage buffer space and processing. A larger window size allows more data to be processed.
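The halve-on-loss behavior of the congestion window can be sketched with a toy calculation. This is an illustration of the rule stated above, not real TCP (the function name and the growth-by-one rule are simplifications for the example):

```shell
#!/bin/sh
# Toy congestion-window rule: halve on loss (never below 1), otherwise grow by one.
next_cwnd() {  # next_cwnd <current-window> <lost: 0|1>
  if [ "$2" -eq 1 ]; then
    n=$(( $1 / 2 ))
    if [ "$n" -lt 1 ]; then n=1; fi
  else
    n=$(( $1 + 1 ))
  fi
  echo "$n"
}

cwnd=8
for lost in 0 0 1 0; do
  cwnd=$(next_cwnd "$cwnd" "$lost")
done
echo "$cwnd"   # 8 -> 9 -> 10 -> 5 -> 6
```

A single loss undoes several rounds of growth, which is what lets the window contract quickly when the network is congested.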
A star topology is designed with each node (file server, workstations, and peripherals) connected directly to a central network hub or concentrator.
Data on a star network passes through the hub or concentrator before continuing to its destination. The hub or concentrator manages and controls all functions of the network. It also acts as a repeater for the data flow. This configuration is common with twisted pair cable; however, it can also be used with coaxial cable or fiber optic cable.
Advantages of a Star Topology
Easy to install and wire.
No disruptions to the network when connecting or removing devices.
Easy to detect faults and to remove parts.
Disadvantages of a Star Topology
Requires more cable length than a linear topology.
If the hub or concentrator fails, nodes attached are disabled.
More expensive than linear bus topologies because of the cost of the concentrators.
Remote file copy – Synchronize file trees across local disks, directories or across a network.
# Local file to Local file
rsync [option]… Source [Source]… Dest
# Local to Remote
rsync [option]… Source [Source]… [user@]host:Dest
rsync [option]… Source [Source]… [user@]host::Dest
# Remote to Local
rsync [option]… [user@]host::Source [Dest]
rsync [option]… [user@]host:Source [Dest]
rsync [option]… rsync://[user@]host[:PORT]/Source [Dest]
rsync is a program that behaves in much the same way that rcp does, but has many more options and uses the rsync remote-update protocol to greatly speed up file transfers when the destination file already exists.
The rsync remote-update protocol allows rsync to transfer just the differences between two sets of files across the network link, using an efficient checksum-search algorithm described in the technical report that accompanies this package.
Some of the additional features of rsync are:
# support for copying links, devices, owners, groups and permissions
# exclude and exclude-from options similar to GNU tar
# a CVS exclude mode for ignoring the same files that CVS would ignore
# can use any transparent remote shell, including rsh or ssh
# does not require root privileges
# pipelining of file transfers to minimize latency costs
# support for anonymous or authenticated rsync servers (ideal for mirroring)
There are six different ways of using rsync. They are:
# for copying local files. This is invoked when neither source nor destination path contains a : separator
# for copying from the local machine to a remote machine using a remote shell program as the transport (such as rsh or ssh).
This is invoked when the destination path contains a single : separator.
# for copying from a remote machine to the local machine using a remote shell program. This is invoked when the source contains a : separator.
# for copying from a remote rsync server to the local machine. This is invoked when the source path contains a :: separator or a rsync:// URL.
# for copying from the local machine to a remote rsync server. This is invoked when the destination path contains a :: separator.
# for listing files on a remote machine. This is done the same way as rsync transfers except that you leave off the local destination.
Note that in all cases (other than listing) at least one of the source and destination paths must be local.
You use rsync in the same way you use rcp.
You must specify a source and a destination, one of which may be remote.
Perhaps the best way to explain the syntax is some examples:
rsync *.c foo:src/
this would transfer all files matching the pattern *.c from the current directory
to the directory src on the machine foo.
If any of the files already exist on the remote system then the
rsync remote-update protocol is used to update the file by sending only the differences.
See the tech report for details.
rsync -avz foo:src/bar /data/tmp
this would recursively transfer all files from the directory src/bar
on the machine foo into the /data/tmp/bar directory on the local machine.
The files are transferred in "archive" mode, which ensures that symbolic links,
devices, attributes, permissions, ownerships etc are preserved in the transfer.
Additionally, compression will be used to reduce the size of data portions of the transfer.
rsync -avz foo:src/bar/ /data/tmp
a trailing slash on the source changes this behavior to transfer all files
from the directory src/bar on the machine foo into the /data/tmp/.
A trailing / on a source name means "copy the contents of this directory".
Without a trailing slash it means "copy the directory".
This difference becomes particularly important when using the --delete option.
You can also use rsync in local-only mode, where both the source and destination
don't have a ":" in the name.
In this case it behaves like an improved copy command.
rsync somehost.mydomain.com::
this would list all the anonymous rsync modules available on
the host somehost.mydomain.com. (See the following section for more details.)
CONNECTING TO AN RSYNC SERVER
It is also possible to use rsync without using rsh or ssh as the transport.
In this case you will connect to a remote rsync server running on TCP port 873.
You may establish the connection via a web proxy by setting the environment variable
RSYNC_PROXY to a hostname:port pair pointing to your web proxy.
Note that your web proxy's configuration must allow proxying to port 873.
Using rsync in this way is the same as using it with rsh or ssh except that:
# you use a double colon :: instead of a single colon to separate the hostname
from the path.
# the remote server may print a message of the day when you connect.
# if you specify no path name on the remote server then the list of accessible
paths on the server will be shown.
# if you specify no local destination then a listing of the specified files on
the remote server is provided.
Some paths on the remote server may require authentication.
If so then you will receive a password prompt when you connect.
You can avoid the password prompt by setting the environment variable
RSYNC_PASSWORD to the password you want to use or using the --password-file option.
This may be useful when scripting rsync.
WARNING: On some systems environment variables are visible to all users.
On those systems using --password-file is recommended.
RUNNING AN RSYNC SERVER
An rsync server is configured using a config file which by default is
called /etc/rsyncd.conf. Please see the rsyncd.conf(5) man page for more information.
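As a sketch, a minimal rsyncd.conf might look like the following; the module name, paths, and settings here are illustrative assumptions, not shipped defaults:

```
pid file = /var/run/rsyncd.pid
uid = nobody

# a read-only module named "pub" (hypothetical)
[pub]
    path = /srv/rsync/pub
    comment = public files
    read only = yes
```

Clients would then reach this module as rsync somehost::pub.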
To back up the home directory using a cron job:
rsync -Cavz . ss64:backup
Run the above over a PPP link to a duplicate directory on machine "ss64".
To synchronize samba source trees use the following Makefile targets:
get:
	rsync -avuzb --exclude '*~' samba:samba/ .
put:
	rsync -Cavuzb . samba:samba/
sync: get put
This allows me to sync with a CVS directory at the other end of the link.
I then do cvs operations on the remote machine, which saves a lot of time
as the remote cvs protocol isn't very efficient.
I mirror a directory between my "old" and "new" ftp sites with the command
rsync -az -e ssh --delete ~ftp/pub/samba/ nimbus:"~ftp/pub/tridge/samba"
This is launched from cron every few hours.
Here is a short summary of the options available in rsync.
Please refer to the FULL List of OPTIONS for a complete description.
What to copy:
-r, --recursive recurse into directories
-R, --relative use relative path names
--exclude=PATTERN exclude files matching PATTERN
--exclude-from=FILE exclude patterns listed in FILE
-I, --ignore-times don't exclude files that match length and time
--size-only only use file size when determining if a file should be transferred
--modify-window=NUM Timestamp window (seconds) for file match (default=0)
--include=PATTERN don't exclude files matching PATTERN
--include-from=FILE don't exclude patterns listed in FILE
How to copy it:
-n, --dry-run show what would have been transferred
-l, --links copy symlinks as symlinks
-L, --copy-links copy the referent of symlinks
--copy-unsafe-links copy links outside the source tree
--safe-links ignore links outside the destination tree
-H, --hard-links preserve hard links
-D, --devices preserve devices (root only)
-g, --group preserve group
-o, --owner preserve owner (root only)
-p, --perms preserve permissions
-t, --times preserve times
-S, --sparse handle sparse files efficiently
-x, --one-file-system don't cross filesystem boundaries
-B, --block-size=SIZE checksum blocking size (default 700)
-e, --rsh=COMMAND specify rsh replacement
--rsync-path=PATH specify path to rsync on the remote machine
--numeric-ids don't map uid/gid values by user/group name
--timeout=TIME set IO timeout in seconds
-W, --whole-file copy whole files, no incremental checks
-a, --archive archive mode
-b, --backup make backups (default ~ suffix)
--backup-dir make backups into this directory
--suffix=SUFFIX override backup suffix
-z, --compress compress file data
-c, --checksum always checksum
-C, --cvs-exclude auto ignore files in the same way CVS does
--existing only update files that already exist
--delete delete files that don't exist on the sending side
--delete-excluded also delete excluded files on the receiving side
--delete-after delete after transferring, not before
--force force deletion of directories even if not empty
--ignore-errors delete even if there are IO errors
--max-delete=NUM don't delete more than NUM files
--log-format=FORMAT log file transfers using specified format
--partial keep partially transferred files
--progress show progress during transfer
-P equivalent to --partial --progress
--stats give some file transfer stats
-T, --temp-dir=DIR create temporary files in directory DIR
--compare-dest=DIR also compare destination files relative to DIR
-u, --update update only (don't overwrite newer files)
--address=ADDRESS bind to the specified address
--blocking-io use blocking IO for the remote shell
--bwlimit=KBPS limit I/O bandwidth, KBytes per second
--config=FILE specify alternate rsyncd.conf file
--daemon run as an rsync daemon
--no-detach do not detach from the parent
--password-file=FILE get password from FILE
--port=PORT specify alternate rsyncd port number
-f, --read-batch=FILE read batch file
-F, --write-batch write batch file
--version print version number
-v, --verbose increase verbosity
-q, --quiet decrease verbosity
-h, --help show this help screen
-h, –help show this help screen
Tips on how to use each of the options above can be found in the
FULL List of OPTIONS and Exit Values
The exclude and include patterns specified to rsync allow for flexible selection of
which files to transfer and which files to skip.
rsync builds an ordered list of include/exclude options as specified on the
command line. When a filename is encountered, rsync checks the name against each
exclude/include pattern in turn. The first matching pattern is acted on.
If it is an exclude pattern, then that file is skipped.
If it is an include pattern then that filename is not skipped.
If no matching include/exclude pattern is found then the filename is not skipped.
Note that when used with -r (which is implied by -a), every subcomponent of
every path is visited from top down, so include/exclude patterns get applied
recursively to each subcomponent.
Note also that the –include and –exclude options take one pattern each.
To add multiple patterns use the –include-from and –exclude-from options
or multiple –include and –exclude options.
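The ordered, first-match-wins check described above can be sketched in Python. This is a simplified model that uses shell-style matching via fnmatch and ignores anchoring, "**", and directory-only patterns:

```python
from fnmatch import fnmatch

def first_match_action(filename: str, rules: list) -> str:
    """Sketch of rsync's include/exclude decision (simplified).

    'rules' is an ordered list of ('+', pattern) includes and
    ('-', pattern) excludes; the first matching pattern decides,
    and a file with no matching pattern is not skipped.
    """
    for action, pattern in rules:
        if fnmatch(filename, pattern):
            return "include" if action == "+" else "exclude"
    return "include"  # no matching pattern: the file is transferred
```

With rules equivalent to --include "*.c" --exclude "*", main.c is kept while main.o is skipped, because "*.c" matches first.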
The patterns can take several forms. The rules are:
# if the pattern starts with a / then it is matched against the start of the filename,
otherwise it is matched against the end of the filename.
Thus "/foo" would match a file called "foo" at the base of the tree.
On the other hand, "foo" would match any file called "foo" anywhere in the tree
because the algorithm is applied recursively from top down; it behaves as if each
path component gets a turn at being the end of the file name.
# if the pattern ends with a / then it will only match a directory, not a file,
link or device.
# if the pattern contains a wildcard character from the set *?[ then expression
matching is applied using the shell filename matching rules.
Otherwise a simple string match is used.
# if the pattern includes a double asterisk "**" then all wildcards in the pattern
will match slashes, otherwise they will stop at slashes.
# if the pattern contains a / (not counting a trailing /) then it is matched
against the full filename, including any leading directory.
If the pattern doesn't contain a / then it is matched only against the final
component of the filename. Again, remember that the algorithm is applied recursively
so "full filename" can actually be any portion of a path.
# if the pattern starts with "+ " (a plus followed by a space) then it is always
considered an include pattern, even if specified as part of an exclude option.
The "+ " part is discarded before matching.
# if the pattern starts with "- " (a minus followed by a space) then it is always
considered an exclude pattern, even if specified as part of an include option.
The "- " part is discarded before matching.
# if the pattern is a single exclamation mark ! then the current include/exclude list
is reset, removing all previously defined patterns.
The +/- rules are most useful in exclude lists, allowing you to have a single
exclude list that contains both include and exclude options.
If you end an exclude list with --exclude "*", note that since the algorithm is applied recursively, unless you explicitly include the parent directories of the files you want to include, the algorithm will stop at those parent directories and never see the files below them. To include all directories, use --include "*/" before the --exclude "*".
Here are some exclude/include examples:
# --exclude "*.o" would exclude all filenames matching *.o
# --exclude "/foo" would exclude a file in the base directory called foo
# --exclude "foo/" would exclude any directory called foo.
# --exclude "/foo/*/bar" would exclude any file called bar two levels below a
base directory called foo.
# --exclude "/foo/**/bar" would exclude any file called bar two or more levels below
a base directory called foo.
# --include "*/" --include "*.c" --exclude "*"
would include all directories and C source files
# --include "foo/" --include "foo/bar.c" --exclude "*"
would include only foo/bar.c (the foo/ directory must be
explicitly included or it would be excluded by the "*")
The following call generates 4 files that encapsulate the information for
synchronizing the contents of target_dir with the updates found in src_dir
$ rsync -F [other rsync options here] src_dir/ target_dir
The generated files are labeled with a common timestamp:
# rsync_argvs. command-line arguments
# rsync_flist. rsync internal file metadata
# rsync_csums. rsync checksums
# rsync_delta. data blocks for file update & change
See http://www.ils.unc.edu/i2dsi/unc_rsync+.html for papers and technical reports.
Three basic behaviours are possible when rsync encounters a symbolic link in
the source directory.
By default, symbolic links are not transferred at all.
A message "skipping non-regular file" is emitted for any symlinks that exist.
If --links is specified, then symlinks are recreated with the same target
on the destination. Note that --archive implies --links.
If --copy-links is specified, then symlinks are "collapsed" by copying their referent,
rather than the symlink.
rsync also distinguishes "safe" and "unsafe" symbolic links; a link is considered unsafe if it points outside the tree being copied.
An example where this might be used is a web site mirror that wishes to ensure that the
rsync module being copied does not include symbolic links to /etc/passwd in the public
section of the site. Using --copy-unsafe-links will cause any such links to be copied
as the file they point to on the destination.
Using --safe-links will cause unsafe links to be omitted altogether.
rsync occasionally produces error messages that may seem a little cryptic.
The one that seems to cause the most confusion is
"protocol version mismatch -- is your shell clean?".
This message is usually caused by your startup scripts or remote shell facility
producing unwanted garbage on the stream that rsync is using for its transport.
The way to diagnose this problem is to run your remote shell like this:
rsh remotehost /bin/true > out.dat
then look at out.dat. If everything is working correctly then out.dat should be
a zero length file. If you are getting the above error from rsync then you will
probably find that out.dat contains some text or data.
Look at the contents and try to work out what is producing it.
The most common cause is incorrectly configured shell startup scripts
(such as .cshrc or .profile) that contain output statements for non-interactive logins.
If you are having trouble debugging include and exclude patterns,
then try specifying the -vv option.
At this level of verbosity rsync will show why each individual file is included or excluded.
SETUP
See the file README for installation instructions.
Once installed you can use rsync to any machine that you can use rsh to.
rsync uses rsh for its communications, unless both the source and destination are local.
You can also specify an alternative to rsh, either by using the -e command line
option, or by setting the RSYNC_RSH environment variable.
One common substitute is to use ssh, which offers a high degree of security.
Note that rsync must be installed on both the source and destination machines.
ENVIRONMENT VARIABLES
The CVSIGNORE environment variable supplements any ignore patterns in .cvsignore files.
See the –cvs-exclude option for more details.
The RSYNC_RSH environment variable allows you to override the default shell used as
the transport for rsync. This can be used instead of the -e option.
The RSYNC_PROXY environment variable allows you to redirect your rsync client to
use a web proxy when connecting to a rsync daemon.
You should set RSYNC_PROXY to a hostname:port pair.
Setting RSYNC_PASSWORD to the required password allows you to run authenticated
rsync connections to a rsync daemon without user intervention.
Note that this does not supply a password to a shell transport such as ssh.
USER or LOGNAME
The USER or LOGNAME environment variables are used to determine the default
username sent to a rsync server.
The HOME environment variable is used to find the user's default .cvsignore file.
Routing is the process by which an item gets from one location to another. Many items get routed: for example, mail, telephone calls, and trains. In networking, a router is the device used to route traffic. The routing information a router learns from its routing sources is placed in its routing table. The router relies on this table to tell it which port to use when forwarding addressed packets.
Types of Routes
- Static routes – Routes learned by the router when an administrator manually establishes the route. The administrator must manually update this static route entry whenever an internetwork topology change requires an update.
- Dynamic Routes – Routes dynamically learned by the router after an administrator configures a routing protocol that helps determine routes. Unlike static routes, once the network administrator enables dynamic routing, route knowledge is automatically updated by a routing process whenever new topology information is received from the internetwork.
Static Route Configuration
A static route allows manual configuration of the routing table. No dynamic changes to this table entry will occur as long as the path is active. The ip route command is used to configure a static route in global configuration mode.
A default route is a special type of static route. A default route is a route to use for situations when the route from a source to a destination is not known or when it is unfeasible for the routing table to store sufficient information about the route.
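The relationship between specific routes and the default route can be sketched as a longest-prefix lookup that falls back to 0.0.0.0/0. This is a conceptual model, not IOS internals, and the interface names are made up:

```python
import ipaddress

def lookup(routing_table: dict, dest_ip: str):
    """Return the next hop for dest_ip using longest-prefix match (sketch).

    routing_table maps prefix strings to next hops; the default route
    0.0.0.0/0 matches every address, so it is chosen only when nothing
    more specific matches.
    """
    dest = ipaddress.ip_address(dest_ip)
    best = None
    for prefix, next_hop in routing_table.items():
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, next_hop)
    return best[1] if best else None
```

A table with a route for 10.1.0.0/16 plus a default route sends 10.1.x.x traffic out the specific route and everything else out the default.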
Routing protocols are used between routers to determine paths and maintain routing tables. Dynamic routing relies on a routing protocol to disseminate knowledge.
Characteristics of Routing Protocols
A routing protocol defines the set of rules used by a router when it communicates with neighboring routers. It interprets information in a network layer address to allow a packet to be forwarded to the destination network.
Routing protocols describe:
- How updates are conveyed
- What knowledge is conveyed
- When to convey knowledge
- How to locate recipients of the updates
Two examples of routing protocols are Routing Information Protocol (RIP) and Interior Gateway Routing Protocol (IGRP).
An autonomous system is a collection of networks under a common administrative domain.
There are two major types of routing protocols used to connect autonomous systems:
- Interior Gateway Protocols (IGP) – Routing Protocols used to exchange routing information within an autonomous system. RIP and IGRP are examples of IGPs.
- Exterior Gateway Protocols (EGP) – used to connect between autonomous systems. Border Gateway Protocol (BGP) is an example of an EGP.
Ranking Routes with Administrative Distance
Multiple routing protocols and static routes may be used at the same time. If there are several sources for routing information, an administrative distance value is used to rate the trustworthiness of each routing information source. An Administrative Distance is a rating of the trustworthiness of a routing information source, such as an individual router or a group of routers. It is an integer from 0 to 255. Specifying administrative distance values enables the Cisco IOS software to discriminate between sources of routing information. For each destination learned, the IOS always places in the routing table the route from the source with the lowest administrative distance. In general, a routing protocol with a lower administrative distance has a higher likelihood of being used.
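The selection rule can be sketched as picking the candidate whose information source has the lowest distance. The values below mirror common Cisco defaults, and the data layout is an assumption for illustration:

```python
# Administrative distances for a few route sources (common Cisco defaults,
# assumed here for illustration).
ADMIN_DISTANCE = {"connected": 0, "static": 1, "igrp": 100, "rip": 120}

def best_route(candidates: list) -> dict:
    """Install the route whose information source is most trustworthy,
    i.e. the one with the lowest administrative distance (sketch)."""
    return min(candidates, key=lambda route: ADMIN_DISTANCE[route["source"]])
```

If both RIP and a static route offer a path to the same destination, the static route (distance 1) wins over the RIP route (distance 120).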
Classes of Routing Protocols
Within an autonomous system, most IGP routing algorithms can be classified as conforming to one of three algorithms.
Distance Vector The distance vector routing approach determines the direction (vector) and distance to any link in the internetwork.
Link State The link-state (also called shortest path first) approach re-creates the exact topology of the entire internetwork (or at least the partition in which the router is situated).
Balanced Hybrid A balanced hybrid approach combines aspects of the link-state and distance vector algorithms.
An example of a distance vector protocol is the Routing Information Protocol (RIP).
Engineers have implemented this link-state concept in Open Shortest Path First (OSPF) routing. An example of a balanced hybrid protocol is Cisco's Enhanced Interior Gateway Routing Protocol (Enhanced IGRP).
Distance Vector Routing Problems
- Distance vector routing protocols maintain routing information by updating routing tables with neighboring routing tables.
- A routing loop is a route where packets never reach their destination, but cycle repeatedly through a series of nodes.
- Defining a maximum hop count prevents infinite loops by placing a limit on the number of hops.
- Split horizon is a technique for solving routing loops that implements not sending information about a route back in the same direction from which it came.
- Route poisoning is a solution to loops in which routers set the distance of routes that have gone down to infinity to make that route unreachable.
- A triggered update is a new routing table that is sent immediately in response to some change. Each receiving router sends a triggered update which creates a wave that propagates across the network.
- Hold-down timers are used to prevent regular update messages from inappropriately reinstating a route that may have gone bad.
- Solutions involving multiple techniques can be implemented on networks with multiple routes.
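Split horizon, for instance, can be sketched as filtering the update sent to each neighbor. This is a conceptual model; the table layout is an assumption:

```python
def advertise(table: dict, neighbor: str) -> dict:
    """Build the routing update sent to 'neighbor' with split horizon (sketch):
    routes learned from that neighbor are omitted, so route information is
    never sent back in the direction it came from.

    'table' maps destination networks to (metric, learned_from) pairs.
    """
    return {dest: metric
            for dest, (metric, learned_from) in table.items()
            if learned_from != neighbor}
```

A route learned from router R2 is left out of the update sent back to R2, while routes learned elsewhere are still advertised.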
Discovering Neighbors with CDP
Cisco Discovery Protocol (CDP) is an information-gathering tool used by network administrators to get information about directly connected devices. CDP is a proprietary tool that enables network administrators to access a summary of protocol and address information about other devices that are directly connected to the device initiating the command. CDP runs over the data link layer, connecting the physical media to the upper-layer protocols. Because CDP operates at this level, two or more CDP devices that support different network-layer protocols (for example, IP and Novell IPX) can learn about each other. Physical media supporting the Subnetwork Access Protocol (SNAP) encapsulation connect CDP devices. These can include all LANs, Frame Relay and other WANs, and ATM networks. A CDP packet can be as small as 80 octets, mostly made up of ASCII strings that represent information such as CDP interface details, neighbor entries, statistics, etc.
The network administrator uses a show command to display information about the networks directly connected to the switch.
CDP Summary Information
Packets formed by CDP provide the following information about each CDP neighbor device:
- Device identifiers – For example, the switch's configured name and domain name (if any).
- Address list – Up to one address for each protocol supported.
- Port identifier – The name of the local and remote port (in the form of an ASCII character string such as ethernet0).
- Capabilities list – Supported features, for example, the device acts as a source-route bridge as well as a router.
- Platform – The device's hardware platform: for example, Cisco 7000.