Push Technology

Push technology, or server push, is a style of Internet-based communication where the request for a given transaction originates with the publisher or the server. It is in contrast with pull technology, where the request for the transmission of information originates with the receiver or the client.

General use

Push services are often based on information preferences expressed in advance. This is known as a publish/subscribe model. A client might "subscribe" to various information "channels". Whenever new content is available on one of those channels, the server would push that information out to the user.
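To make the publish/subscribe idea concrete, here is a minimal Python sketch of a broker that pushes each published message out to its subscribers; the Broker class and the channel name are purely illustrative, not part of any particular push product:

```python
from collections import defaultdict

class Broker:
    """Toy in-memory broker: clients subscribe to named channels,
    and the broker pushes each published message to every subscriber."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # channel -> list of callbacks

    def subscribe(self, channel, callback):
        # The client registers its interest in advance.
        self._subscribers[channel].append(callback)

    def publish(self, channel, message):
        # The server initiates delivery: push, not pull.
        for callback in self._subscribers[channel]:
            callback(message)

broker = Broker()
broker.subscribe("news", lambda msg: print("received:", msg))
broker.publish("news", "New article available")  # prints: received: New article available
```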

Synchronous conferencing and instant messaging are typical examples of push services. Chat messages and sometimes files are pushed to the user as soon as they are received by the messaging service. Both decentralized peer-to-peer programs and centralized programs allow pushing files, meaning the sender initiates the data transfer rather than the recipient.

Email is also a push system: the SMTP protocol on which it is based is a push protocol. However, the last step—from mail server to desktop computer—typically uses a pull protocol like POP3 or IMAP. Modern e-mail clients make this step seem instantaneous by repeatedly polling the mail server, frequently checking it for new mail. The original BlackBerry was the first popular example of push technology in a wireless context.
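For example, the polling step that makes pull-based mail feel like push can be sketched in a few lines of Python with the standard imaplib module; the host and credentials below are placeholders:

```python
import imaplib

# Hypothetical host and credentials, for illustration only.
HOST, USER, PASSWORD = "imap.example.com", "user", "secret"

def check_for_new_mail():
    """One polling cycle: connect over IMAP and count unseen messages."""
    with imaplib.IMAP4_SSL(HOST) as conn:
        conn.login(USER, PASSWORD)
        conn.select("INBOX", readonly=True)
        status, data = conn.search(None, "UNSEEN")
        return len(data[0].split()) if status == "OK" else 0

print(check_for_new_mail(), "new message(s)")
```

A mail client simply runs a cycle like this every minute or so; the shorter the interval, the more the pull protocol imitates true push.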
Another popular type of Internet push technology was the PointCast Network, which gained popularity in the 1990s and delivered news and stock market data. Other uses include push-enabled web applications such as market data distribution (stock tickers), online chat/messaging systems (web chat), auctions, online betting and gaming, sports results, monitoring consoles, and sensor network monitoring.

Source: Wikipedia


Web Security Basics

There are several key concerns related to web security. How secure are the systems that control the exchange of information on the web? How secure is the information stored on the numerous computers across the web? It is a known fact that what can be used can also be misused. We should always remember that if organizational information is hacked, whether through the network or by other means, it can incur a heavy cost to the company. A failure in network security can also cost the organization in terms of goodwill and reputation: no other organization will be interested in doing business with a company that cannot protect its own information and security systems.
The following points need to be kept in mind:

1. Login Page Should Be Encrypted: The number of times I have seen Web sites that only use SSL (with https: URL schemes) after user authentication is accomplished is really dismaying. Encrypting the session after login may be useful, like locking the barn door so the horses don't get out, but failing to encrypt logins is a bit like leaving the key in the lock when you're done locking the barn door. Even if your login form POSTs to an encrypted resource, in many cases this can be circumvented by a malicious security cracker who crafts his own login form to access the same resource and give him access to sensitive data.
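As a sketch of how the whole login flow can be forced onto an encrypted connection, here is a hypothetical Flask application (Flask is my assumption; the article names no framework) that refuses to serve any page, the login form included, over plain HTTP:

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def require_https():
    # Runs before every route: if the request arrived over plain HTTP,
    # bounce it to the https:// equivalent so even the login form is encrypted.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.route("/login", methods=["GET", "POST"])
def login():
    # Both the form itself and the credentials it POSTs now travel over TLS.
    return "login page placeholder"
```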

2. Data Validation Should Be Done Server-Side: Many Web forms include some JavaScript data validation. If this validation includes anything meant to provide improved security, that validation means almost nothing. A malicious security cracker can craft a form of his own that accesses the resource at the other end of the Web page's form action and doesn't include any validation at all. Worse yet, many cases of JavaScript form validation can be circumvented simply by deactivating JavaScript in the browser or by using a Web browser that doesn't support JavaScript at all. In some cases, I've even seen login pages where the password validation is done client-side, which either exposes the passwords to the end user via the ability to view page source or, at best, allows the end user to alter the form so that it always reports successful validation. Don't let your Web site security be a victim of client-side data validation. Server-side validation does not fall prey to the shortcomings of client-side validation, because a malicious security cracker must already have gained access to the server to be able to compromise it.
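To illustrate, here is a minimal sketch of validation that lives entirely on the server; the field names and rules are invented for the example:

```python
import re

def validate_registration(form):
    """Server-side checks; these run on the server regardless of what the
    browser did or did not validate with JavaScript."""
    errors = []
    username = form.get("username", "")
    email = form.get("email", "")
    if not re.fullmatch(r"[A-Za-z0-9_]{3,20}", username):
        errors.append("username must be 3-20 letters, digits, or underscores")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("email address is not well formed")
    return errors

# A crafted request that bypassed all client-side JavaScript still fails here.
print(validate_registration({"username": "x", "email": "not-an-email"}))
```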

3. Manage your Web site via encrypted connections: Using unencrypted connections (or even connections using only weak encryption), such as unencrypted FTP or HTTP for Web site or Web server management, opens you up to man-in-the-middle attacks and login/password sniffing. Always use encrypted protocols such as SSH to access secure resources, using verifiably secure tools such as OpenSSH. Once someone has intercepted your login and password information, that person can do anything you could have done.
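For instance, uploading a page over SFTP (which runs inside the encrypted SSH channel) instead of plain FTP might look like this using the third-party paramiko library; the host name and paths are placeholders:

```python
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()                               # verify the server's host key
client.set_missing_host_key_policy(paramiko.RejectPolicy())  # never trust unknown hosts blindly
client.connect("www.example.com", username="webmaster",
               key_filename="/home/webmaster/.ssh/id_ed25519")

# SFTP rides on the encrypted SSH channel, unlike plain FTP.
sftp = client.open_sftp()
sftp.put("site/index.html", "/var/www/html/index.html")
sftp.close()
client.close()
```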

4. Use strong, cross-platform-compatible encryption: Believe it or not, SSL is not the top-of-the-line technology for Web site encryption any longer. Look into TLS, which stands for Transport Layer Security, the successor to Secure Socket Layer encryption. Make sure any encryption solution you choose doesn't unnecessarily limit your user base the way proprietary, platform-specific technologies might, as this can lead to resistance to the use of secure encryption for Web site access. The same principles also apply to back-end management, where cross-platform-compatible strong encryption such as SSH is usually preferable to platform-specific, weaker encryption tools such as Windows Remote Desktop.
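In Python's standard ssl module, for example, you can build a client context that refuses SSLv3 and the older TLS versions outright; the host name below is just an example:

```python
import socket, ssl

# Build a client context that rejects SSL 3.0 and TLS 1.0/1.1.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. 'TLSv1.2' or 'TLSv1.3'
```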

5. Connect from a secured network: Avoid connecting from networks with unknown or uncertain security characteristics, or from those with known poor security, such as open wireless access points in coffee shops. This is especially important whenever you must log in to the server or Web site for administrative purposes or otherwise access secure resources. If you must access the Web site or Web server when connected to an unsecured network, use a secure proxy so that your connection to the secure resource comes from a proxy on a secured network. In previous articles, I have addressed how to set up a quick and easy secure proxy using either an OpenSSH secure proxy or a PuTTY secure proxy.
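As a rough sketch, OpenSSH's -D option gives you exactly such a proxy: a local SOCKS listener that tunnels traffic through a trusted remote host. The host name here is a placeholder:

```python
import subprocess

# OpenSSH's -D flag opens a local SOCKS proxy that tunnels all traffic
# through the trusted remote host's network.
proxy = subprocess.Popen(
    ["ssh", "-N",              # no remote command, tunnel only
     "-D", "1080",             # local SOCKS5 listener on port 1080
     "user@trusted-host.example.com"]
)
# Point the browser's SOCKS proxy setting at localhost:1080; traffic then
# leaves the untrusted coffee-shop network inside the encrypted SSH session.
# Later: proxy.terminate() to close the tunnel.
```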

6. Prefer key-based authentication over password authentication: Password authentication is more easily cracked than cryptographic key-based authentication. The purpose of a password is to make it easier to remember the login credentials needed to access a secure resource, but if you use key-based authentication and only copy the key to predefined, authorized systems (or, better yet, to separate media kept apart from the authorized system until it's needed), you will use a stronger authentication credential that's more difficult to crack.
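As one illustration (again using the third-party paramiko library, which the article itself does not mention), generating a keypair and printing the public half for the server's authorized_keys file might look like this:

```python
import paramiko

# Generate a keypair once; keep the private half off the server entirely.
key = paramiko.RSAKey.generate(3072)
key.write_private_key_file("deploy_key", password="a-long-passphrase")

# The public half is what goes into the server's authorized_keys file.
print(f"{key.get_name()} {key.get_base64()} deploy@workstation")
```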

7. Maintain a secure workstation: If you connect to a secure resource from a client system that you can't guarantee with complete confidence is secure, you cannot guarantee someone isn't "listening in" on everything you're doing. Keyloggers, compromised network encryption clients, and other tricks of the malicious security cracker's trade can all allow someone unauthorized access to sensitive data, regardless of all the secured networks, encrypted communications, and other networking protections you employ. Integrity auditing may be the only way to be sure, with any certainty, that your workstation has not been compromised.
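A very simple form of integrity auditing is to record cryptographic digests of your files while the machine is known-good and compare them later; this sketch uses SHA-256, with the paths chosen purely for illustration:

```python
import hashlib, json, pathlib

def snapshot(root):
    """Map every file under root to its SHA-256 digest."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in pathlib.Path(root).rglob("*") if p.is_file()}

# First run: record a trusted baseline (store it on separate, offline media).
baseline = snapshot("public_html")
pathlib.Path("baseline.json").write_text(json.dumps(baseline))

# Later runs: any changed digest flags a file that may have been tampered with.
current = snapshot("public_html")
for path, digest in current.items():
    if baseline.get(path) != digest:
        print("changed or new:", path)
```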

In order for anyone to attack your web site, there has to be a way in: an unguarded doorway into your server.

There are only three places where these doorways exist:

1. On the local computer that you use to upload your web pages.
2. Through any form field that you use on your web site to collect information from your visitors (see the sketch after this list).
3. At the physical location where your server is housed.
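Of these, the second doorway is the one a remote attacker can reach most easily. As a minimal sketch (the render_comment function is hypothetical, not from the original article), Python's standard html module can neutralize script injected through a form field before it is echoed back into a page:

```python
import html

def render_comment(raw_comment):
    """Escape user-supplied text before echoing it back into a page,
    so a form field cannot be used to inject script into your site."""
    return "<p>" + html.escape(raw_comment) + "</p>"

# A hostile visitor's input is rendered inert, not executed:
print(render_comment('<script>alert("owned")</script>'))
# <p>&lt;script&gt;alert(&quot;owned&quot;)&lt;/script&gt;</p>
```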


With virus and worm attacks there also has to be a doorway in, but the doorway exists primarily in the software your web site visitors use to view your site (Microsoft Internet Explorer), not in the site itself.

You can guard against the virus replicating itself on your mail server or the worm being downloaded to your local network or computer by using firewall software, but responding to virus and worm attacks consists mainly of educating yourself and your co-workers to prevent more infections and waiting for Microsoft to patch their software.

On the other hand, if you're running your own server, you need to stay on top of potential new security threats from viruses and worms daily to close any potential doorways into your server (covered in server security). The good news is that if you're just programming a web site (the majority of police webmasters), there are many straightforward and relatively simple methods you can use to protect your site and close those doorways.
Source: http://blogs.techrepublic.com.com/security/?p=424
