The Independent Computing Architecture (ICA) protocol is a presentation
layer protocol in the Open Systems Interconnection (OSI) model and is the
engine by which ICA clients and MetaFrame (MF) servers communicate
data.
--------------------------------------------------------------------------------
SpeedScreen
SpeedScreen works by transmitting only the part of the screen that has
changed. For example, assume that you hovered your mouse in the lower-right
corner of the screen over the clock. SpeedScreen then compares the current
screen to the last screen refresh it sent you, determines that only the
lower-right corner of the screen has changed, and thus refreshes just that
part of the screen instead of resending the entire screen.
SpeedScreen Latency Reduction is the name given to two SpeedScreen features:
Local Text Echo and Mouse Click Feedback.
--------------------------------------------------------------------------------
Data Store and LHC
Prior to MetaFrame Presentation Server 3.0, licensing information was also
stored in the Data Store. With the advent of MetaFrame Presentation Server
(MPS) 3.0, licensing information is stored on the Citrix Licensing Server.
All the information stored in the IMA Data Store is manipulated through the
Management Console.
Local Host Cache (LHC) is a Microsoft Access database that is located on
every MetaFrame Presentation Server and that holds a smaller version of the
Data Store. It carries enough information to keep the server running in the
event that the main Data Store becomes unavailable for any reason.
The Local Host Cache is located in C:\Program Files\Citrix\Independent
Management Architecture\IMALHC.MDB. As changes are made to the IMA
Data Store, the MetaFrame servers are notified of this change, and they, in
turn, update or refresh their Local Host Cache database with the updated
information. The LHC contains information about published applications in
the farm and the servers that host them.
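If the LHC ever becomes corrupted, it can be rebuilt from the IMA Data
Store with the dsmaint utility that ships with MetaFrame. A minimal sketch,
run on the affected server (the IMA Service must be stopped first):

```shell
rem Sketch only: rebuild the Local Host Cache from the IMA Data Store.
net stop imaservice
dsmaint recreatelhc
net start imaservice
```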
--------------------------------------------------------------------------------
Zones
Zones provide a way of grouping geographically close servers to save
network bandwidth and improve performance. Every zone elects one Data
Collector (DC), to which every server in that zone reports. If servers in
the same zone are geographically very dispersed, significant network
bandwidth is consumed because the servers constantly talk with the DC and
vice versa. This is why it is recommended that you group your servers into
zones based on their location.
Data Collectors
Data Collectors (DCs) are responsible for keeping zone-specific information.
Every zone has one elected server that acts as the DC and maintains
information gathered from all the servers in that zone, information such as
server user load and active and disconnected sessions. Every MPS server in
the zone will notify the DC of its changes every 60 seconds.
Zone-to-Zone DC Communications
Prior to MetaFrame Presentation Server 3.0, every Data Collector in every zone
communicated its information to other Data Collectors in other zones. With MPS
3.0, this capability has been disabled by default to preserve network bandwidth.
This change was prompted because large organizations suffered network bandwidth
problems due to the constant replication of information between DCs in different
zones. This change, however, comes at a cost. If a user now wants to connect
to an application that is located outside his or her primary zone, the DC for
that user's zone needs to request information from the DCs in the other
zones, and as such, application launch times may be slightly delayed.
To get around this delay, whenever you have more than one zone, you should
configure the Zone preference and failover policy in the Policies node.
--------------------------------------------------------------------------------
Listener Ports / Idle Sessions (Windows 2000 only)
Large organizations that have users in the thousands are advised to add
more idle sessions to cope with heavy logon attempts. You can do this by
editing the following Registry value:
HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server
IdleWinStationPoolCount
Modify the value accordingly. It is recommended that you add these idle
sessions in multiples of two.
For performance reasons, it is recommended that you do not exceed a total of
10 idle sessions. The more idle sessions you create, the more memory and
other server resources are consumed.
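As a sketch, the value can be set from a command prompt with reg.exe (the
syntax below is the XP/2003-style syntax; on Windows 2000 itself, use
Regedt32 or the Resource Kit reg tool instead). The pool count of 4 is only
an example value:

```shell
rem Sketch only: set the idle WinStation pool count to 4 (example value).
rem A restart is required for the change to take effect.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v IdleWinStationPoolCount /t REG_DWORD /d 4 /f

rem Verify the new value:
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v IdleWinStationPoolCount
```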
--------------------------------------------------------------------------------
To manually trigger a DC election, use the command:
querydc -e
--------------------------------------------------------------------------------
Back in the MetaFrame 1.8 days, whenever an ICA client requested
information or queried the farm for published applications, it broadcast a
message via UDP port 1604. A Master ICA Browser residing in the same subnet
as the client requesting the information responded to the request. If the
ICA client computer broadcasting the request did not have a browser gateway
configured, it could view only the information that the Master ICA Browser
in its own subnet carried, thereby getting only a partial listing.
Beginning with MetaFrame XP 1.0, Citrix solved this problem by storing all
the information in the IMA Data Store and then replicating it to the Local
Host Cache on every MF server. This also eliminated the need for the UDP
broadcast, replacing it with IMA. Now when an ICA client queries any
server, a full list of published applications is provided.
--------------------------------------------------------------------------------
Terminal Services Licensing does not have to be installed on a domain
controller, but it is a recommended practice to do so.
--------------------------------------------------------------------------------
Data Store Connections: Direct Mode or Indirect Mode
A "direct" data store connection is when SQL Server, Oracle, or IBM DB2 is
used to host the data store. An "indirect" data store connection is when you
install the data store on the first Presentation Server using Microsoft
Access or MSDE. The connection is considered indirect because every MPS
server must first connect to the MPS server hosting the database and, through
it, to the data store.
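For illustration, MPS servers reach the data store through a file DSN
(MF20.dsn) in the Independent Management Architecture folder. The keys below
sketch roughly what it contains for an Access-hosted store; the exact
contents vary by database type, so treat this as illustrative rather than
literal:

```ini
[ODBC]
DRIVER=Microsoft Access Driver (*.mdb)
DBQ=C:\Program Files\Citrix\Independent Management Architecture\MF20.mdb
```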
--------------------------------------------------------------------------------
Two non-administrator MetaFrame Access Suite Licensing (MASL) licenses are
granted for 96 hours after the MASL server is first installed; this is
called the Startup Grace Period (unlimited administrative connections are
allowed). This period differs from the Failover Grace Period (loss of
communication between an MPS server and the MASL server), which is 30 days.
The existing documentation available with MPS 3.0 still states that the
grace period is only 96 hours (4 days); if the license file was downloaded
after August 19, 2004, it allows for the new 30-day grace period.
Startup Grace Period...: 96 hours (4 days)
Failover Grace Period..: 30 days
The MASL server runs two internal services (daemons) that combine to
deliver the license server's functionality. These services are called the
License Manager Daemon (LMGRD.EXE) and the Citrix Vendor Daemon
(CITRIX.EXE).
Two distinct types of licensing activity take place in an MPS 3.0 environment:
Initial server connection phase
Occurs when the MetaFrame Presentation Server initially boots up.
Client Access License retrieval
Occurs when a client device connects to a MetaFrame server.
The MASL server address is stored in the data store. When an MPS server
boots, it obtains this address from the data store and then queries the
License Manager Daemon for the port on which the Citrix Vendor Daemon is
running (this port is chosen at random when the MASL server starts). The
connection between the MPS server and the Citrix Vendor Daemon persists as
long as the MPS server is running. The MPS server checks out a startup
license from the MASL server, which is a requirement before it can check
out a Client Access License.
The local MPS server stores a local copy of the license information in the
LHC, and once per hour the licensing information is updated to reflect any
changes in licensing. If connectivity to the MASL server is lost, the MPS
server begins its failover grace period (30 days). If 100 CALs were
available when the MASL server went offline, each server can accept 100
connections during the failover grace period; no pooling of licenses occurs
during this period.
MASL TCP Ports:
License Manager Daemon: TCP 27000
Citrix Vendor Daemon: Randomly chosen at startup
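Because the vendor daemon's port is random, a firewall between the MPS
servers and the MASL server can block license checkouts. License servers
based on FLEXlm (which Citrix licensing uses) generally let you pin the
vendor daemon to a fixed port by editing the VENDOR line of the license
file; the port number 27009 below is only a placeholder:

```
VENDOR CITRIX port=27009
```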
--------------------------------------------------------------------------------