Distributed Applications

1. 3. Architecture of DSs

1.1. System Models

1.1.1. Architectural Model

1.1.1.1. Software Layers

1.1.1.1.1. 1. applications, services

1.1.1.1.2. 2. middleware

1.1.1.1.3. 3. operating system

1.1.1.1.4. 4. computer and network devices

1.1.1.2. System Architectures

1.1.1.2.1. client-server model

1.1.1.2.2. proxy-servers

1.1.1.2.3. peer processes

1.1.1.2.4. community of software agents

1.1.2. Failure Model

1.1.2.1. crash faults

1.1.2.1.1. process simply stops

1.1.2.1.2. Reasons: Hardware failures or Software errors

1.1.2.2. message loss

1.1.2.2.1. buffer overflow of routers

1.1.2.2.2. network congestion

1.1.2.3. fail-stop failures

1.1.2.3.1. same as crash faults

1.1.2.3.2. but dependent systems get notified

1.1.2.4. timing failures

1.1.2.4.1. the local clock drifts too much from real time

1.1.2.4.2. transmission takes too long

1.1.2.5. arbitrary/non-malicious Byzantine failures

1.1.2.5.1. a process arbitrarily omits intended processing steps

1.1.2.5.2. a process takes unintended processing steps

1.1.2.5.3. a process sends corrupted messages

1.1.2.6. malicious Byzantine failures

1.1.2.6.1. an attacker who knows the system tries to break it

1.1.2.6.2. corruption or replay of messages

1.1.2.6.3. modification of the program

1.1.3. Security Model

1.1.3.1. secure communication

1.1.3.1.1. through cryptography

1.1.3.2. access control

1.1.3.2.1. protect objects against unauthorized access

1.1.3.3. authentication

1.1.3.3.1. proving identities of senders

1.2. Transparency

1.2.1. Location

1.2.1.1. Problem: location of resource or service in a DS

1.2.1.2. Solution: naming of objects and nameservices

1.2.2. Access

1.2.2.1. Treat local and remote objects the same way

1.2.3. Replication

1.2.3.1. Treat copies of resources as if they were one (as seen from outside)

1.2.3.2. Useful if redundancy in the backend is necessary for faster access

1.2.4. Migration

1.2.4.1. dependent processes should not notice when object locations change

1.2.4.2. Host Migration

1.2.4.2.1. Moving a computer (e.g. a laptop) within a network should not matter; its network services stay available

1.2.5. Language

1.2.5.1. components written in different programming languages can interact without knowing which language the other component is implemented in

1.2.6. Other

1.2.6.1. Failure

1.2.6.1.1. mask failures

1.2.6.2. Concurrency

1.2.6.2.1. e.g. several users can access one object at the same time without interfering

1.2.6.3. Execution

1.2.6.3.1. a process may be executed on different runtime systems

1.2.6.4. Performance

1.2.6.4.1. allows dynamic reconfiguration to improve performance on the fly

1.2.6.5. Scalability

1.3. Paradigms for DAs

1.3.1. Information Sharing

1.3.1.1. communication through e.g. shared memory

1.3.2. Message Exchange

1.3.3. Naming Entities (Name Resolution)

1.3.4. Bidirectional Communication

1.3.4.1. Sockets

1.3.4.1.1. application-created

1.3.4.1.2. OS-controlled

1.3.4.1.3. interface

1.3.4.1.4. for sending/receiving messages as stream
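
A minimal sketch of the socket idea in Java (host and port are made-up placeholders): the application creates the socket, the OS controls it, and messages travel as a stream:

    import java.io.*;
    import java.net.Socket;

    public class SocketClient {
        public static void main(String[] args) throws IOException {
            // connect to a server; host/port are placeholders
            try (Socket socket = new Socket("server.example.org", 7777);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()))) {
                out.println("hello");          // send a message into the stream
                String reply = in.readLine();  // block until a reply line arrives
                System.out.println("got: " + reply);
            }
        }
    }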

1.3.4.2. Call Semantics

1.3.4.2.1. request and answer messages can be lost

1.3.4.2.2. sender and receiver can crash

1.3.4.2.3. How often is a requested service operation processed?

1.3.4.2.4. at-least-once

1.3.4.2.5. exactly-once

1.3.4.2.6. last

1.3.4.2.7. at-most-once
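
The semantics follow from the stub's retry strategy; a sketch of at-least-once behaviour (Transport and trySend are assumed helpers, not a real API):

    // at-least-once: the stub retransmits until a reply arrives, so the
    // server may execute the operation more than once; at-most-once would
    // additionally need duplicate filtering on the server side
    interface Transport {
        String trySend(String request, long timeoutMs); // null on timeout (assumed)
    }

    class AtLeastOnceStub {
        private final Transport transport;
        AtLeastOnceStub(Transport transport) { this.transport = transport; }

        String call(String request) {
            String reply;
            do {
                reply = transport.trySend(request, 1000); // retry on message loss
            } while (reply == null);
            return reply;
        }
    }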

1.3.5. Producer-consumer interaction

1.3.5.1. the producer continues processing immediately after invoking the consumer (fire & forget)

1.3.5.2. kind of asynchronous procedure call

1.3.6. Client-Server-Model

1.3.7. Peer-to-Peer-Model

1.3.7.1. direct communication

1.3.7.2. clients=servers

1.3.7.3. interactive cooperation for a DA

1.3.7.4. Issues

1.3.7.4.1. Peer discovery and group management

1.3.7.4.2. Data location and placement

1.3.7.4.3. Reliable and efficient file exchange

1.3.7.4.4. Security/privacy/anonymity/trust

1.3.8. Message serialization

1.3.8.1. One sender

1.3.8.1.1. receiver sequence

1.3.8.1.2. sequence numbers from sender

1.3.8.2. Several senders

1.3.8.2.1. no serialization

1.3.8.2.2. loosely synchronous

1.3.8.2.3. virtually synchronous

1.3.8.2.4. totally ordered

1.4. Client-Server-Model

1.4.1. Definitions

1.4.1.1. Sender, Receiver

1.4.1.1.1. pure message exchanging entities

1.4.1.2. Client, Server

1.4.1.2.1. entities acting in some specialized protocol

1.4.2. Client

1.4.2.1. Process that issues requests to servers

1.4.2.2. Process that runs on a client machine

1.4.3. Service

1.4.3.1. Piece of software that provides a set of information services to clients

1.4.4. Server

1.4.4.1. Machine on which services run

1.4.4.2. Machine that can always process requests from clients

1.4.5. Services

1.4.5.1. File-Service

1.4.5.1.1. provides centralized data storage to clients distributed across a network

1.4.5.2. Time Service

1.4.5.3. Name-Service

1.4.5.3.1. e.g. DNS

1.4.6. Client-Server-Interface

1.4.6.1. Client-Interface

1.4.6.1.1. import interface

1.4.6.1.2. represents server on the client

1.4.6.1.3. prepares parameters for sending and creates and sends messages to the server

1.4.6.1.4. receives results and prepares them for further processing on the client

1.4.6.2. Server-Interface

1.4.6.2.1. export interface

1.4.6.2.2. represents all possible clients on the server

1.4.6.2.3. accepts requests from clients and prepares the parameters for processing by the service

1.4.6.2.4. invokes the service with the request parameters

1.4.6.2.5. prepares and sends answer message with the result

1.4.7. Multitier-Architecture

1.4.7.1. components in between client and server

1.4.7.2. behaves like server and client at the same time

1.4.8. Single vs. multiple parallel server processes

1.4.8.1. Single dedicated server process

1.4.8.2. Cloning of new server processes

1.4.9. Stateful vs. Stateless

1.4.9.1. Stateless

1.4.9.1.1. Client needs to hold session data if necessary

1.4.9.1.2. server crashes are unproblematic (no session state is lost)

1.4.9.2. Stateful

1.4.9.2.1. good for multi-step actions, where server needs to hold knowledge of the client

1.4.9.2.2. higher complexity on server

1.4.9.2.3. server caches data for client

1.4.9.2.4. problematic situations after a crash (session state must be restored)

1.4.10. LDAP

1.4.10.1. Steps

1.4.10.1.1. Make a connection

1.4.10.1.2. Login (="binding")

1.4.10.1.3. Make action (search, read, ...)

1.4.10.1.4. Close connection
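
The four steps above as a hedged Java sketch using the standard JNDI LDAP provider (server URL, bind DN, password, and search filter are placeholders):

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.*;

    public class LdapLookup {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://ldap.example.org:389");     // 1. connection
            env.put(Context.SECURITY_PRINCIPAL, "cn=admin,dc=example,dc=org"); // 2. binding
            env.put(Context.SECURITY_CREDENTIALS, "secret");
            DirContext ctx = new InitialDirContext(env);  // connect + bind happen here

            // 3. action: search for an entry
            NamingEnumeration<SearchResult> results =
                    ctx.search("dc=example,dc=org", "(uid=jdoe)", new SearchControls());
            while (results.hasMore()) {
                System.out.println(results.next().getNameInNamespace());
            }

            ctx.close();                                  // 4. close connection
        }
    }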

1.4.10.2. ldif Syntax

1.4.10.2.1. attribute: value lines; looks like JSON without "{" and "}" and without deeper nesting

1.4.10.3. Parts

1.4.10.3.1. DN

1.4.10.3.2. RDN

1.4.10.3.3. DIT

1.4.11. Failure tolerant services

1.4.11.1. Modular redundancy

1.4.11.2. Primary-standby-approach

2. 4. Remote Invocation (RPC/RMI)

2.1. Differences RPC - local procedure call

2.1.1. different processes

2.1.2. no shared address space

2.1.3. no common runtime environment

2.1.4. client and server can have different life spans

2.1.5. communication failures have to be handled

2.2. Definition

2.2.1. Synchronous

2.2.2. defined by signature of the called procedure

2.2.3. different address spaces

2.2.4. narrow channels: only parameters and results are exchanged (no shared memory)

2.3. Layer of RPCs

2.3.1. Presentation Layer

2.3.2. Between Application and Session Layer

2.3.3. Session layer is the OS interface

2.3.4. Transport layer (TCP/IP) below the Session layer

2.4. Protocols

2.4.1. R

2.4.2. RR

2.4.3. RRA

2.4.4. R = Request, RR = Request/Reply, RRA = Request/Reply/Acknowledge

2.5. Stubs

2.5.1. Interfaces for the programmer to hide communication-details of RPCs (transparency)

2.5.2. Client

2.5.2.1. 1. RPC method specification

2.5.2.2. 2. sending to the correct server

2.5.2.3. 3. transform parameters to transmission format

2.5.2.4. 4. blocking the client while the server operation runs, unblocking on the reply

2.5.3. Server

2.5.3.1. 1. decoding of parameter values

2.5.3.2. 2. invoke procedure

2.5.3.3. 3. transform results into transmission format

2.5.3.4. 4. send them back

2.6. RPC-Generator/-Language

2.6.1. transforms interface descriptions written in an RPC/interface definition language into client/server stubs in a general-purpose programming language

2.6.2. e.g. wsdl to client-/server-stubs

2.7. Binding

2.7.1. Static

2.7.1.1. while developing

2.7.1.2. hardcoded within the program

2.7.2. Semi-Static

2.7.2.1. during client-initialization

2.7.2.2. server address stays unchanged while the process runs

2.7.2.3. binding via

2.7.2.3.1. DB

2.7.2.3.2. broadcast-/multicast-message

2.7.2.3.3. nameservice

2.7.2.3.4. broker

2.7.3. Dynamic

2.7.3.1. immediately before each RPC

2.7.3.2. server-migration-transparency

2.7.3.3. binding to alternate servers possible

2.7.3.4. no problems with replacing broken servers

2.7.4. Broker

2.7.4.1. DB for available server interfaces

2.7.4.2. servers register their services (export interfaces)

2.7.4.3. client gets service information from broker (import interface)

2.7.4.4. information

2.7.4.4.1. yellow pages

2.7.4.4.2. white pages

2.7.4.4.3. static attributes

2.7.4.4.4. dynamic attributes

2.8. RMI (Java)

2.8.1. characteristics

2.8.1.1. RPC for Java-objects on distributed systems

2.8.1.2. Java-objects from different VMs communicate with each other

2.8.1.3. location & access transparency

2.8.1.4. objects can be passed as parameters via Serializable

2.8.1.5. Interaction takes place via interfaces

2.8.2. binding/RMI-registry

2.8.2.1. server registers objects in registry

2.8.2.2. client gets stubs from server-objects from registry

2.8.2.3. names -> objects

2.8.2.4. stand-alone java app

2.8.2.5. the registry is itself a remote object

2.8.2.6. port 1099

2.8.2.7. on all machines hosting remote objects

2.8.2.8. access via java.rmi.Naming

2.8.2.9. example procedure

2.8.2.9.1. open connection to service

2.8.2.9.2. receive a stub from the RMI registry

2.8.2.9.3. obj = registry.lookup(name)

2.8.2.9.4. interaction with remote-object obj
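
The example procedure as a minimal Java RMI sketch; TimeService and its registry name are invented for illustration:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;

    // invented remote interface; interaction always goes through such an interface
    interface TimeService extends Remote {
        long currentTime() throws RemoteException;
    }

    public class RmiClient {
        public static void main(String[] args) throws Exception {
            // open a connection to the registry (default port 1099)
            Registry registry = LocateRegistry.getRegistry("server.example.org", 1099);
            // receive a stub for the remote object from the RMI registry
            TimeService obj = (TimeService) registry.lookup("TimeService");
            // interaction with the remote object obj via its stub
            System.out.println(obj.currentTime());
        }
    }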

2.8.3. parameter passing

2.8.3.1. local object directly passed (serialized)

2.8.3.2. remote objects only per reference=stub

2.8.4. garbage collection

2.8.4.1. via reference counter

2.8.4.2. counter has a lifespan

2.8.4.3. must be updated regularly

2.8.4.4. if counter == 0, the remote object can be collected

3. 5. Basic Mechanisms for DAs

3.1. External Data Representation

3.1.1. (Un-)Marshalling

3.1.1.1. Marshalling: serialize parameters to stream

3.1.1.2. Unmarshalling: parameter extraction out of stream

3.1.1.3. via plugins or RPC-system
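
A minimal (un)marshalling sketch using Java object serialization:

    import java.io.*;

    public class MarshalDemo {
        public static void main(String[] args) throws Exception {
            // marshalling: serialize a parameter object into a byte stream
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(new int[] {1, 2, 3});
            }
            // unmarshalling: extract the parameter back out of the stream
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                int[] param = (int[]) in.readObject();
                System.out.println(param.length); // prints 3
            }
        }
    }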

3.1.2. Centralized Transformation

3.1.2.1. receiver doesn't transform data

3.1.2.2. a central instance transforms incoming and outgoing data for the receiver

3.1.3. Decentralized Transformation

3.1.3.1. varying role allocation

3.1.4. Common external data representation

3.1.4.1. should be

3.1.4.1.1. machine independent

3.1.4.1.2. sufficient for complex data structures

3.1.4.2. examples

3.1.4.2.1. CORBA CDR

3.1.4.2.2. XDR (Sun)

3.1.4.2.3. Java object serialization

3.1.4.3. numbers

3.1.4.3.1. little endian

3.1.4.3.2. big endian

3.1.4.4. strings

3.1.4.5. arrays

3.1.4.6. pointers

3.1.4.6.1. prohibit them

3.1.4.6.2. marshal and unmarshal the pointed-to data structure

3.1.4.7. via XML

3.1.4.7.1. XSD

3.1.4.7.2. SOAP

3.2. Time

3.2.1. definitions

3.2.1.1. formula for software timestamps

3.2.1.1.1. Ci(t) = a * Hi(t) + b

3.2.1.1.2. Hi is hardware clock

3.2.1.1.3. Ci is software clock / timestamp

3.2.1.2. skew

3.2.1.2.1. difference between two clocks

3.2.1.3. clock drift rate

3.2.1.3.1. difference per unit of time from ideal clock

3.2.1.4. external synchronization

3.2.1.4.1. via external precise time source

3.2.1.5. internal synchronization

3.2.1.5.1. between pairs / groups of computers

3.2.1.6. externally within bound D => internally within bound 2*D

3.2.2. Correctness

3.2.2.1. means: when drift rate is within bound q

3.2.2.2. error between two times is bounded

3.2.2.2.1. (1-q)(t’-t) <= H(t’)-H(t) <= (1+q)(t’-t), where t’>t

3.2.2.3. monotonicity

3.2.2.3.1. t’>t => C(t’)>C(t)

3.2.3. Synchronization Methods

3.2.3.1. Cristian's method (intranet)

3.2.3.1.1. client C requests time t from time server S

3.2.3.1.2. t_new = t + T_rtt/2

3.2.3.1.3. Accuracy: +/- (T_rtt/2 - min)

3.2.3.1.4. Time range: [t+min, t+T_rtt-min]

3.2.3.1.5. Problems

3.2.3.1.5.1. the single time server is a single point of failure

3.2.3.1.5.2. accuracy suffers from variable and asymmetric message delays
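
A sketch of the arithmetic, with requestServerTime standing in for the real network call:

    public class CristianClock {
        // hypothetical RPC that returns the server's clock value in milliseconds
        static long requestServerTime() { return System.currentTimeMillis(); }

        public static void main(String[] args) {
            long sendAt = System.currentTimeMillis();
            long t = requestServerTime();                    // server time t
            long rtt = System.currentTimeMillis() - sendAt;  // T_rtt
            long tNew = t + rtt / 2;                         // t_new = t + T_rtt/2
            System.out.println("adjusted time: " + tNew);
            // true server time lies in [t + min, t + T_rtt - min],
            // so accuracy is +/- (T_rtt/2 - min) for minimal one-way delay min
        }
    }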

3.2.3.2. Berkeley algorithm (intranet)

3.2.3.2.1. for internal synchronization

3.2.3.2.2. master polls the time of the slaves

3.2.3.2.3. master averages the values and sends each slave a correction

3.2.3.3. NTP - Network Time Protocol

3.2.3.3.1. this is how UTC is spread

3.2.3.3.2. the time is synchronized in a tree structure through the internet

3.2.3.3.3. time exchange via

3.2.3.3.4. an offset o and delay d (RTT) is estimated

3.2.3.3.5. Formulas
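
3.2.3.3.5.1. offset o = ((T2 - T1) + (T3 - T4)) / 2

3.2.3.3.5.2. delay d = (T4 - T1) - (T3 - T2)

3.2.3.3.5.3. where T1/T4 are the client's send/receive timestamps and T2/T3 the server's receive/send timestamps (the standard NTP estimates)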

3.3. Distributed Execution Model

3.3.1. Scalar clocks

3.3.2. Vector clocks

3.3.2.1. strictly consistent

3.3.2.1.1. means: a -> b <=> C(a) < C(b) (scalar clocks only guarantee a -> b => C(a) < C(b))
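
A minimal vector clock sketch in Java (illustrative, not from the source); happenedBefore tests C(a) < C(b) componentwise:

    import java.util.Arrays;

    public class VectorClock {
        private final int[] clock;
        private final int me;    // index of the owning process

        VectorClock(int n, int me) { this.clock = new int[n]; this.me = me; }

        void localEvent() { clock[me]++; }                   // tick on each local event

        int[] send() { clock[me]++; return clock.clone(); }  // timestamp for a message

        void receive(int[] ts) {                             // merge on receive
            for (int i = 0; i < clock.length; i++) clock[i] = Math.max(clock[i], ts[i]);
            clock[me]++;
        }

        // a happened-before b iff a's clock is <= everywhere and < somewhere
        static boolean happenedBefore(int[] a, int[] b) {
            boolean strict = false;
            for (int i = 0; i < a.length; i++) {
                if (a[i] > b[i]) return false;
                if (a[i] < b[i]) strict = true;
            }
            return strict;
        }
    }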

3.4. Failure Handling

3.4.1. Possible failures

3.4.1.1. communication failures

3.4.1.2. single parts of the whole DA can crash or have bugs

3.4.1.3. byzantine failures = arbitrary erratic behaviour

3.4.1.4. failure prone RPC-interfaces

3.4.2. Types of Testing (with respect to communication)

3.4.2.1. without communication

3.4.2.1.1. component functionality

3.4.2.2. local communication

3.4.2.2.1. communication, ignoring wider timing issues

3.4.2.3. network wide communication

3.4.2.3.1. time dependencies

3.4.2.3.2. synchronous/asynchronous aspects

3.4.2.3.3. multiple clients

3.4.3. Debugging Issues

3.4.3.1. Communication

3.4.3.1.1. has to be observed and controlled

3.4.3.2. Snapshots difficult

3.4.3.2.1. no shared memory

3.4.3.2.2. clock synch

3.4.3.2.3. state of the system = state of all local systems

3.4.3.3. Nondeterminism

3.4.3.3.1. transmission time

3.4.3.3.2. message sequence

3.4.3.3.3. difficult to reproduce erratic situations

3.4.3.4. Debugger

3.4.3.4.1. Breakpoints

3.4.3.4.2. introduces timing irregularities into the DS (probe effect)

3.4.4. Debugging Approaches

3.4.4.1. Communication Monitoring

3.4.4.1.1. only the communication flow is observed

3.4.4.1.2. black box view

3.4.4.1.3. local component testing (e.g. unit tests) is done separately

3.4.4.2. Global Breakpoint

3.4.4.2.1. use of logical clocks

3.4.4.2.2. if an error occurs, roll back along the total ordering to the last consistent state

3.5. Distributed Transactions

3.5.1. ACID-property

3.5.1.1. Atomicity

3.5.1.1.1. executed completely (commit) or not at all, i.e. without any effects (abort)

3.5.1.1.2. Intention List

3.5.1.1.3. New Version

3.5.1.2. Consistency

3.5.1.2.1. state must stay consistent after execution

3.5.1.3. Isolation

3.5.1.3.1. no side-effects on other transactions when running in parallel

3.5.1.3.2. looks then like execution in sequence

3.5.1.3.3. Serialization of transactions

3.5.1.3.4. Locking

3.5.1.3.5. Optimistic concurrency control

3.5.1.4. Durability

3.5.1.4.1. changes have to stay = persistent

3.5.2. 2-Phase commit protocol (2PC)

3.5.2.1. 1. Should a transaction take place?

3.5.2.2. 2. Let servers vote!

3.5.2.3. 3. Coordinator C polls the servers (CanCommit?)

3.5.2.4. 4. If one server says no, abortion signal sent to all others

3.5.2.5. 5. If all say yes, commit signal sent to all

3.5.2.6. 6. ACKs back to C

3.5.2.7. Problems

3.5.2.7.1. one server crashes

3.5.2.7.2. C crashes
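
A bare-bones 2PC coordinator sketch; Participant is an assumed interface, and a real implementation additionally needs timeouts and a persistent log to handle the crash cases above:

    import java.util.List;

    interface Participant {          // assumed interface for the voting servers
        boolean canCommit();         // phase 1 vote
        void doCommit();
        void doAbort();
    }

    class Coordinator {
        boolean runTransaction(List<Participant> servers) {
            // phase 1: collect votes (CanCommit?)
            for (Participant p : servers) {
                if (!p.canCommit()) {
                    // one "no" aborts the transaction for everyone
                    servers.forEach(Participant::doAbort);
                    return false;
                }
            }
            // phase 2: all voted yes -> commit everywhere (participants ACK back)
            servers.forEach(Participant::doCommit);
            return true;
        }
    }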

3.5.3. Distributed Deadlock

3.5.3.1. Solving via centralized deadlock server

3.5.3.1.1. communication takes time

3.5.3.1.2. single point of failure

3.5.3.1.3. -> bad idea

3.5.3.2. Edge Chasing

3.5.3.2.1. initiation

3.5.3.2.2. detection

3.5.3.2.3. resolution

3.6. Group Communication

3.6.1. Issues

3.6.1.1. Group Communication

3.6.1.2. Group Membership

3.6.2. Group addressing

3.6.2.1. Centralized

3.6.2.2. Decentralized

3.6.3. Classification

3.6.3.1. Closed vs. open group

3.6.3.1.1. Open: messages to all members of this group can be sent from outside

3.6.3.1.2. Closed: group cannot be seen from the outside and so not be addressed as a whole

3.6.3.2. Flat vs. hierarchical

3.6.3.2.1. Flat = peer group

3.6.4. Group Management

3.6.4.1. Operations

3.6.4.1.1. Get existing group names

3.6.4.1.2. Create/delete

3.6.4.1.3. Join/leave

3.6.4.1.4. Read/modify group attributes

3.6.4.1.5. Read member info

3.6.4.2. Architecture

3.6.4.2.1. centralized

3.6.4.2.2. decentralized

3.6.4.2.3. hybrid

3.6.5. Message delivery

3.6.5.1. Atomicity Semantics

3.6.5.1.1. exactly-once

3.6.5.1.2. all-or-nothing

3.6.5.2. Ordering for message delivery

3.6.5.2.1. synchronously

3.6.5.2.2. loosely synchronous

3.6.5.2.3. total ordering by sequencer

3.6.5.2.4. virtually synchronous

3.6.5.2.5. sync-ordering

3.6.5.3. Taxonomy of multicast

3.6.5.3.1. unreliable

3.6.5.3.2. reliable

3.6.5.3.3. serialized

3.6.5.3.4. atomic

3.6.5.3.5. atomic, serialized

3.6.6. ISIS

3.6.6.1. abcast-protocol

3.6.6.1.1. = atomic broadcast

3.6.6.1.2. totally ordered

3.6.6.2. cbcast-protocol

3.6.6.2.1. = causal broadcast

3.6.6.2.2. uses vector clock

3.6.7. JGroups

3.7. Distributed Consensus

3.7.1. Consensus Problem

3.7.1.1. once a process has collected all proposed values, it runs a majority algorithm

3.7.1.2. assumption: communication is reliable, but processes may fail

3.7.1.3. Problems

3.7.1.3.1. crashes

3.7.1.3.2. byzantine failures

3.7.2. Byzantine-Generals problem

3.7.2.1. difference to consensus problem

3.7.2.1.1. all processes choose the value that the general chooses

3.7.2.1.2. values are forwarded

3.7.2.2. properties

3.7.2.2.1. termination

3.7.2.2.2. agreement

3.7.2.2.3. integrity

3.7.3. Interactive Consistency Problem

3.7.3.1. processes have to agree on a vector of values

3.7.3.2. properties

3.7.3.2.1. termination

3.7.3.2.2. agreement

3.7.3.2.3. integrity

3.8. Kerberos

3.8.1. Roles

3.8.1.1. Client C

3.8.1.2. Server S

3.8.1.3. KDC - Key Distribution Center

3.8.1.4. TGS - Ticket Granting Service

3.8.2. Security Objects

3.8.2.1. TGS ticket

3.8.2.1.1. generated from KDC for C

3.8.2.1.2. the TGS ticket contains the session key (the ticket itself is encrypted for the TGS)

3.8.2.1.3. the message carrying the TGS ticket also contains the session key encrypted with the client's password

3.8.2.1.4. C sends ticket to TGS

3.8.2.2. Authenticator

3.8.2.2.1. another ticket, generated by the TGS

3.8.2.2.2. for authentication of C at S needed

3.8.2.2.3. only requested when C wants to access S

3.8.2.3. Session key

3.8.2.3.1. for communication between S and C

3.8.3. Authentication

3.8.3.1. C sends username to KDC

3.8.3.2. C receives TGS ticket and a session key, which is encrypted with his password

3.8.3.3. C decrypts session key

3.8.3.4. decryption yields a valid session key => password correct

3.8.3.5. C sends TGS-ticket + S to TGS

3.8.3.6. C receives a server ticket from TGS

3.8.3.7. C authenticates at S with server ticket

3.8.3.8. C starts communicating with S

4. 6. Web Services

4.1. SOA

4.1.1. Characteristics

4.1.1.1. composition possible

4.1.1.2. location transparent

4.1.1.3. services communicate with each other

4.1.1.4. while developing focus on interface

4.1.1.5. self contained

4.1.1.6. well defined

4.1.1.7. holds/manages its own data

4.1.1.8. "completely"

4.1.1.9. no dependency of states of other services

4.1.2. SOA vs. component-based architecture

4.1.2.1. tight vs. loose integration

4.1.2.2. code vs. process oriented development

4.1.2.3. complex vs. interoperable architecture

4.1.2.4. built to last vs. built to change

4.1.3. Layered approach

4.1.3.1. mapping of processes to services

4.1.4. Pros & Contras

4.1.4.1. Pros

4.1.4.1.1. interoperability between DAs

4.1.4.1.2. better exchange/access of data

4.1.4.1.3. good when external services have high availability

4.1.4.1.4. exchange between organisations

4.1.4.1.5. less maintenance problems

4.1.4.1.6. good integration into existing systems

4.1.4.2. Contras

4.1.4.2.1. data-sources differ in format & semantics

4.1.4.2.2. security problems because of network issues

4.1.4.2.3. standards still have to grow

4.1.4.2.4. requires sufficient know-how

4.1.5. Definition

4.1.5.1. Described via XML

4.1.5.1.1. Interface descriptions

4.1.5.2. Registered and searchable in registries

4.1.5.3. can be anywhere in the network

4.2. Architecture

4.2.1. Interoperability stack

4.2.2. Roles

4.2.2.1. Provider

4.2.2.1.1. publishes itself to registry

4.2.2.2. Discovery Agency / Registry

4.2.2.2.1. stores WSDL descriptions of the providers' services

4.2.2.3. Requestor

4.2.2.3.1. finds service

4.2.2.3.2. interacts with provider

4.2.3. Technologies

4.2.3.1. SOAP

4.2.3.2. WSDL

4.2.3.2.1. web service descriptions deposited by the providers

4.2.3.3. UDDI

4.2.4. Ways of message exchange

4.2.4.1. peer-to-peer

4.2.4.1.1. the client also provides services to the requested server

4.2.4.2. intermediary

4.2.4.2.1. a server in the middle for special tasks such as routing or proxying

4.2.4.3. direct interaction

4.2.4.3.1. no separate registry; the client itself holds the service information

4.3. SOAP

4.3.1. SOAP message

4.3.1.1. envelope

4.3.1.1.1. parent element for everything

4.3.1.1.2. for XML namespaces

4.3.1.2. header

4.3.1.2.1. encoding rules and other optional blocks (the header is not mandatory)

4.3.1.3. body

4.3.1.3.1. procedure call

4.3.1.3.2. method name

4.3.1.3.3. parameters/results
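
An illustrative SOAP 1.1 request; the getPrice operation, namespace, and parameter are invented:

    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Header/>
      <soap:Body>
        <m:getPrice xmlns:m="http://example.org/prices">
          <m:item>apples</m:item>
        </m:getPrice>
      </soap:Body>
    </soap:Envelope>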

4.4. WSDL

4.4.1. Stub/Skeleton

4.4.2. WSDL-Schema

4.4.2.1. types

4.4.2.1.1. Notes

4.4.2.1.2. schema

4.4.2.2. message

4.4.2.2.1. Notes

4.4.2.2.2. name

4.4.2.2.3. part

4.4.2.3. portType

4.4.2.3.1. Notes

4.4.2.3.2. name

4.4.2.3.3. operation

4.4.2.4. binding

4.4.2.4.1. name

4.4.2.4.2. type

4.4.2.4.3. soap:binding

4.4.2.4.4. operation

4.4.2.5. service

4.4.2.5.1. name

4.4.2.5.2. port

4.4.3. Bad practices

4.4.3.1. bad names and comments

4.4.3.2. binding a port type to a specific protocol

4.4.3.3. putting unrelated operations into one port type

4.4.3.4. overloading output messages

4.5. UDDI

4.5.1. registry service for web services

4.5.2. "LDAP" for webservices

4.5.3. Registry System

4.5.3.1. White Pages

4.5.3.1.1. lists basic information about companies

4.5.3.1.2. company name

4.5.3.1.3. contact information

4.5.3.1.4. services this organization provides

4.5.3.2. Yellow Pages

4.5.3.2.1. listing of Webservices from companies

4.5.3.2.2. organized by business / kind of services

4.5.3.3. Green Pages

4.5.3.3.1. technical information about the webservices (WSDLs)

4.5.4. Entities

4.5.4.1. BusinessEntity

4.5.4.1.1. owner of a Web Service

4.5.4.1.2. Attributes: name, unique key, zero or more services, descriptions

4.5.4.1.3. BusinessService

4.6. Web Service Composition

4.6.1. Orchestration

4.6.1.1. transparent chaining

4.6.1.1.1. client gets webservice from UDDI registry

4.6.1.1.2. client requests one service after each other

4.6.1.1.3. the client coordinates everything itself

4.6.1.2. translucent chaining

4.6.1.2.1. client lets one service make all requests

4.6.1.2.2. client gets all the responses in the right order

4.6.1.3. opaque chaining

4.6.1.3.1. client gives away all tasks to another server

4.6.1.3.2. i.e. a single request and response to/from that server

5. 7. Design of DAs

5.1. Design Steps

5.1.1. 1. identify repositories of application data

5.1.2. 2. data from modules = attributes from models

5.1.3. 3. module interface

5.1.4. 4. network interface

5.1.5. 5. classification as client or server

5.1.6. 6. definition of server-registration methods

5.1.7. 7. developing binding strategies for client to servers

5.2. MDA - Model Driven Architecture

5.2.1. MDA Concept

5.2.1.1. 1. PIM - platform-independent model

5.2.1.1.1. use UML diagrams

5.2.1.1.2. class and component specification

5.2.1.1.3. abstract, no implementation

5.2.1.2. 2. PSM - platform-specific model

5.2.1.2.1. transform the PIM into a platform-specific model

5.2.1.2.2. still not implemented

5.2.1.3. 3. code generation, development, test

5.2.1.3.1. concrete implementation of the classes

6. 8. Distributed File Service

6.1. Terms

6.1.1. Distributed File System

6.1.1.1. Collection of files

6.1.1.2. stored on different computers within a network

6.1.1.3. seen from outside as one file system

6.1.2. Distributed File Service

6.1.2.1. set of services from a DFS

6.1.3. Allocation

6.1.3.1. placement of files within the DFS

6.1.4. Relocation

6.1.5. Replication

6.1.5.1. placing of copies on several computers

6.2. Replicas

6.2.1. Motivation

6.2.1.1. parallel processing of requests

6.2.1.2. higher availability

6.2.1.3. faster response times

6.2.1.4. Less network traffic

6.2.2. Consistency

6.2.2.1. Internal Consistency

6.2.2.1.1. internal copies are consistent

6.2.2.1.2. through e.g. 2-phase-commit protocol

6.2.2.2. Mutual Consistency

6.2.2.2.1. Strict

6.2.2.2.2. Loose

6.2.3. Placement

6.2.3.1. Permanent

6.2.3.1.1. decided in advance

6.2.3.1.2. backup/mirroring

6.2.3.2. Server-initiated

6.2.3.2.1. mainly for performance reasons

6.2.3.2.2. near the client

6.2.3.3. Client-initiated

6.2.3.3.1. mainly for caching reasons

6.2.3.3.2. only for limited time

6.3. Layers of File Service

6.4. Update of Replicated Files

6.4.1. Optimistic Concurrency Control

6.4.1.1. no constraints to users

6.4.1.2. no guaranteed consistent data

6.4.1.3. always take the best available copy

6.4.1.4. parallel reading access possible

6.4.1.5. writing locks only the one copy, not its replicas

6.4.2. Pessimistic Concurrency Control

6.4.2.1. always consistent data

6.4.2.2. multiple update

6.4.2.2.1. voting

6.4.2.2.2. non-voting

7. 9. Distributed Shared Memory

7.1. Consistency

7.1.1. Write-invalidate

7.1.1.1. first, a multicast blocks the replicas against access

7.1.1.2. ACK that update can take place

7.1.1.3. update takes place

7.1.1.4. block is removed

7.1.2. Write-update

7.1.2.1. updates made locally

7.1.2.2. all other replicas are informed about update via multicast

7.1.2.3. replicas update themselves

7.2. Tuple Space

7.2.1. Operations

7.2.1.1. in(tuple)

7.2.1.1.1. reads tuple and deletes it from tuple space

7.2.1.2. out(tuple)

7.2.1.2.1. save tuple

7.2.1.3. read(tuple)

7.2.1.3.1. same as in() but without deletion

7.2.1.4. eval(t)

7.2.1.4.1. generation of new processes

7.2.1.5. non-blocking variants: inp(), outp(), readp()
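
A minimal in-memory tuple space sketch in Java with Linda-style blocking semantics (illustrative; null fields in a template act as wildcards):

    import java.util.ArrayList;
    import java.util.List;

    public class TupleSpace {
        private final List<Object[]> tuples = new ArrayList<>();

        // out(tuple): save a tuple in the space
        public synchronized void out(Object... tuple) {
            tuples.add(tuple);
            notifyAll();                       // wake up blocked readers
        }

        // in(template): read a matching tuple and delete it (blocks until found)
        public synchronized Object[] in(Object... template) throws InterruptedException {
            while (true) {
                Object[] match = find(template);
                if (match != null) { tuples.remove(match); return match; }
                wait();
            }
        }

        // read(template): same as in(), but without deletion
        public synchronized Object[] read(Object... template) throws InterruptedException {
            while (true) {
                Object[] match = find(template);
                if (match != null) return match;
                wait();
            }
        }

        // null fields in the template match anything
        private Object[] find(Object[] template) {
            for (Object[] t : tuples) {
                if (t.length != template.length) continue;
                boolean ok = true;
                for (int i = 0; i < t.length; i++)
                    if (template[i] != null && !template[i].equals(t[i])) ok = false;
                if (ok) return t;
            }
            return null;
        }
    }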

7.2.2. Kinds of tuple spaces

7.2.2.1. central tuple space

7.2.2.2. replicated tuple space

7.2.2.2.1. every machine has complete tuple space copy

7.2.2.3. distributed tuple space

7.2.2.3.1. every machine has parts of the tuple space

7.3. Object Space (Java)

7.3.1. see Tuple Space

7.3.2. like a tuple space, but with Java objects as entries (e.g. JavaSpaces)