12 Cards in this Set
Filesystem:

- Persistent
- Hierarchic name space
- API with CRUD operations
- Sharing data with access control
- Concurrent access
- Mountable filestores

File attribute record structure:

- File length
- Creation timestamp
- Read timestamp
- Write timestamp
- Attribute timestamp
- Reference count
----------------------------
- Owner
- File type
- Access control list

File service requirements:

- THEF: transparency, heterogeneity, efficiency, fault tolerance
- CRCS: concurrency, replication, consistency, security

Model file service architecture:

- Directory service (Lookup, AddName, UnName, GetNames)
- Flat file service (CRWD: Create, Read, Write, Delete; plus GetAttributes, SetAttributes)
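
A minimal sketch of how a client composes the two layers: the directory service maps names to UFIDs (unique file identifiers), and the flat file service operates only on UFIDs. All class and method names here are illustrative, not the textbook RPC signatures.

    class FlatFileService:
        """Toy flat file service: files are addressed by UFID only."""
        def __init__(self):
            self.files = {}                 # UFID -> bytearray
            self.next_ufid = 0

        def create(self):
            ufid = self.next_ufid
            self.next_ufid += 1
            self.files[ufid] = bytearray()
            return ufid

        def read(self, ufid, pos, n):
            return bytes(self.files[ufid][pos:pos + n])

        def write(self, ufid, pos, data):
            self.files[ufid][pos:pos + len(data)] = data

    class DirectoryService:
        """Toy directory service: maps names to UFIDs within a directory."""
        def __init__(self):
            self.dirs = {0: {}}             # directory UFID -> {name: UFID}

        def lookup(self, dir_ufid, name):
            return self.dirs[dir_ufid][name]

        def add_name(self, dir_ufid, name, ufid):
            self.dirs[dir_ufid][name] = ufid

    # Pathname resolution belongs to the client, not to either service.
    def resolve(ds, path, root=0):
        ufid = root
        for part in path.strip("/").split("/"):
            ufid = ds.lookup(ufid, part)
        return ufid

    ffs, ds = FlatFileService(), DirectoryService()
    f = ffs.create()
    ds.add_name(0, "notes.txt", f)
    ffs.write(f, 0, b"hello")
    print(ffs.read(resolve(ds, "/notes.txt"), 0, 5))   # b'hello'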

SunNFS:

- Support for: THEF
- Limited support for: CRCS

Architecture:
- Client: application programs -> VFS -> {UNIX FS, other file systems, NFS client}
- Server: VFS -> {UNIX FS, NFS server}

The NFS implementation does not need to run at kernel level: it can run at application level instead.

Advantages of the UNIX kernel implementation:
- applications need not be recompiled
- a single cache is shared by all processes
- direct access to i-nodes and file blocks
- the encryption key used for security can be kept in the kernel


NFS access control and authentication:

- Stateless server: the user's identity must be checked on every request
- userId and groupId are sent with each request
- Kerberos encryption for stronger authentication


Mounting:

- mount(remotehost, remotedirectory, localdirectory)
- a mounted remote directory is identified by <IP address, port, file handle>


Automounter:

- mounts on demand when an empty mount point is referenced
- keeps a table of mount points
- provides a simple form of replication (several servers can back one mount point)
- keeps the kernel mount table small



Kerberos:

- used in the mount service
- checks userId and groupId

Problems:
- cannot have multiple users logged in on the same client
- all remote filestores must be remounted at login

NFS optimization:

- UNIX-like file caching: memory buffer cache, writes deferred to the next sync

NFSv3 write options:
- write-through: data written to disk immediately
- delayed commit: data held in server memory, flushed on commit(), issued when the file is closed

Server caching does nothing to reduce RPC traffic between client and server



Timestamp-based validity check: a cached entry is valid if

(T - Tc < t) OR (Tm_client = Tm_server)

where T = current time, Tc = time the entry was last validated, t = freshness interval, and Tm = time the block was last modified (as recorded at the client and at the server).
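
The same check as code (names like freshness_interval are illustrative; a real NFS client tracks these values per cache entry):

    import time

    def cache_entry_valid(last_validated, freshness_interval,
                          tm_client, tm_server):
        """Validity check for one cached block.

        The first clause needs no server contact; the second requires a
        getattr RPC to learn the server-side modification time.
        """
        T = time.time()
        if T - last_validated < freshness_interval:
            return True                     # fresh enough, skip the RPC
        return tm_client == tm_server       # unchanged on the server?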


NFS summary:

Access: E, Location: N, Concurrency: L, Replication: L, Failure: L, Mobility: N, Performance: G, Scaling: G
(reading the letters as E = excellent, G = good, L = limited, N = none)

Names and identifiers:

- a name resolves to an identifier/address
- names are preferred over identifiers (human-readable, stable when objects move)
- name services resolve names



Namespace requirements:

- management of trust
- infinite number of names
- structured
- simple, meaningful names



URL resolution: DNS lookup (name -> IP) -> ARP lookup (IP -> MAC) | resource ID -> web server -> file

Recursive navigation (used where security restrictions prevent client access) vs. non-recursive (iterative) navigation

Caching: previous name resolutions are cached (with validity checks), and other servers' caches can be used

DNS: Internet-scale through caching; typical resolution around 100 ms; namespace divided into TLDs and their subdivisions



Resolution algorithm: check the local cache first; on a miss, ask a superior DNS name server, which returns either the answer (an IP) or a referral to another name server (NS).
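
A toy model of that cache-then-referral loop (the ZONES data and record layout are invented for illustration; this is not the DNS wire protocol):

    CACHE = {}

    # server name -> the records it can answer with
    ZONES = {
        "root":       {"example.org": ("NS", "org-server")},
        "org-server": {"example.org": ("A", "93.184.216.34")},
    }

    def resolve(name, server="root"):
        if name in CACHE:                       # 1. local cache
            return CACHE[name]
        while True:
            rtype, value = ZONES[server][name]  # 2. ask the current server
            if rtype == "A":                    # answer: an IP address
                CACHE[name] = value
                return value
            server = value                      # NS referral: follow it

    print(resolve("example.org"))   # walks root -> org-server
    print(resolve("example.org"))   # answered from the cache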



DNS resource records: A, NS, CNAME, SOA, MX, TXT, PTR, HINFO



Issues: cached entries go stale when name-to-address bindings change; the namespace structure is hard to change


Directory service: "yellow pages" lookup (search by attributes rather than by name)

Discovery service: a directory service that is automatically updated as services change and that discovers services for clients



GNS: handles cache consistency and restructuring of the name space through unique directory identifiers, e.g. #633 (world), #599 (EC), #642 (America)

X.500: ISO and ITU standard

- DUA (Directory User Agent) and DSA (Directory System Agent)
- Tree structure
- Directory Information Tree (DIT): Root -> France -> University -> ...
- Directory Information Base (DIB): entries with attributes (Name, Dept, University, City)

Clock synchronization (Cristian's algorithm):

- clock time C vs. UTC t; drift rate dC/dt (ideally 1)
- assumes the request and response delays are equal: D_Treq = D_Tresp
- so the client sets its clock to the server's time plus half the round-trip time
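
A sketch of the resulting adjustment (query_time_server is a hypothetical stand-in for the real RPC):

    import time

    def query_time_server():
        # Hypothetical stand-in for the RPC to the time server.
        return time.time()

    def cristian_sync():
        """Cristian's algorithm: trust the server, split the round trip.

        Exact only if the request and response delays really are equal.
        """
        t0 = time.monotonic()
        server_time = query_time_server()
        t1 = time.monotonic()
        return server_time + (t1 - t0) / 2   # adjusted local time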



Berkeley algorithm: the time daemon polls every machine, averages the answers (its own included), and tells each machine how far to adjust its clock
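
One round of that averaging as a sketch (network delay ignored; real daemons compensate for it). The 3:00 / 2:50 / 3:25 figures are the classic textbook example:

    def berkeley_round(daemon_clock, other_clocks):
        """Return the offset each clock should apply (daemon first)."""
        clocks = [daemon_clock] + other_clocks
        average = sum(clocks) / len(clocks)
        return [average - c for c in clocks]

    # Daemon at 3:00, others at 2:50 and 3:25 -> average is 3:05.
    print(berkeley_round(180, [170, 205]))   # [5.0, 15.0, -20.0]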

Clock sync in wireless networks:


Message prep -> Time spent in NIC -> Delivery time to app



Lamport's algorithm:

- happens-before relation: a -> b
- every message carries the sender's clock; if a message stamped 60 arrives at a process whose clock is lower, the receiver sets its clock to 60 + 1 = 61 and keeps ticking at its own rate from there
- the adjustment is done in middleware: app -> middleware adjusts the timestamp -> network
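
A minimal Lamport clock, matching the 60 -> 61 example above (a sketch, not a library API):

    class LamportClock:
        def __init__(self, start=0):
            self.time = start

        def tick(self):
            self.time += 1            # local event
            return self.time

        def send(self):
            return self.tick()        # stamp the outgoing message

        def receive(self, msg_time):
            # Jump past the message's timestamp if it is ahead of us.
            self.time = max(self.time, msg_time) + 1
            return self.time

    a, b = LamportClock(59), LamportClock(45)
    ts = a.send()          # a ticks to 60 and stamps the message
    print(b.receive(ts))   # b jumps to max(45, 60) + 1 = 61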

Vector clocks: each process Pi keeps a vector VCi such that:

1. VCi[i] is the logical clock at Pi
2. if VCi[j] = k, then Pi knows that k events have occurred at Pj (Pi's knowledge of local time at Pj)

Sending a message m from Pi:
- first increment VCi[i] <- VCi[i] + 1
- set m's timestamp ts(m) to VCi

On receiving m at Pj:
- VCj[k] <- max(VCj[k], ts(m)[k]) for every k, then increment VCj[j]
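
Those two rules as code (a sketch; n is the number of processes, pid is this process's index):

    class VectorClock:
        def __init__(self, pid, n):
            self.pid = pid
            self.vc = [0] * n

        def send(self):
            self.vc[self.pid] += 1          # increment own entry first
            return list(self.vc)            # ts(m) = copy of VCi

        def receive(self, ts):
            # Component-wise max, then count the receive as a local event.
            self.vc = [max(a, b) for a, b in zip(self.vc, ts)]
            self.vc[self.pid] += 1

    p0, p1 = VectorClock(0, 2), VectorClock(1, 2)
    m = p0.send()          # p0 is now [1, 0]
    p1.receive(m)          # p1 becomes [1, 1]
    print(p0.vc, p1.vc)    # [1, 0] [1, 1]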


Centralised mutual exclusion algorithm:

- if process 1 asks for resource 3 and it is free, the coordinator grants it
- if process 2 then asks for 3, the coordinator queues the request and does not reply (so process 2 blocks)
- when process 1 releases 3, the coordinator grants it to process 2


Distributed (Ricart-Agrawala):

- requests carry timestamps; the lower timestamp wins a tie


Election algorithms:

1. Bully algorithm
- P sends an ELECTION message to all higher-numbered processes
- if no one responds, P wins and becomes coordinator
- if a higher-up responds, it takes over and P's part is done; the higher-up becomes coordinator



2. Ring algorithm
- the ELECTION message circulates around the ring, each live node appending its ID; crashed nodes are skipped. E.g., node 0 sends [5, 6, 0] to node 1
- when the message returns to the initiator, the highest ID collected becomes coordinator
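
The collect-then-pick-the-maximum pass as a sketch (the ring order, alive set, and crash pattern are made-up inputs):

    def ring_election(ring, alive, initiator):
        """Collect IDs of live nodes once around the ring; highest wins."""
        start = ring.index(initiator)
        collected = [ring[(start + i) % len(ring)]
                     for i in range(len(ring))
                     if ring[(start + i) % len(ring)] in alive]
        return max(collected)

    # Ring 0..6 with nodes 2 and 3 crashed; node 5 starts the election.
    print(ring_election([0, 1, 2, 3, 4, 5, 6], {0, 1, 4, 5, 6}, 5))  # 6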



Wireless networks:

- step 1: the source (say node 4) broadcasts to its neighbours, 6 and 8
- step 2: 6 broadcasts to everything it is connected to, as does 8 (build-tree phase)
- etc.
- each node reports the best candidate back up the tree, and the source picks the overall best node



Large-scale systems:

- superpeers, each allotted a set of regular nodes; low latency; evenly distributed
- superpeers repel one another when they are not in their own group, which spreads them out

Sequential consistency:

- the result of any execution is the same as if all operations were executed in some sequential order
- the operations of each process appear in that sequence in program order

p1: W(x)a
_________________________
p2: W(x)b

Causal consistency:

- writes that are causally related must be seen by all processes in the same order
- concurrent writes may be seen in different orders on different machines (but a read can never precede the write it depends on)

Grouping operations (entry consistency): Acquire before accessing shared data, Release so that others can read the updates



Eventual consistency: replicas gradually converge if updates stop; typical of replicated databases across a WAN


Monotonic reads: if a process reads x, any successive read of x by that same process returns the same or a more recent value.

WS(x1; x2): the write set that produced x2 includes the writes that produced x1 (x2 incorporates the earlier writes).



Read your writes:

- the effect of a write on x will be seen by any successive read of x by the same process

W(x1) -> R(x2)



Writes follow reads:

- a write takes place on the same or a more recent value of x than the one last read
- i.e. WS(x1; x2)



Replica server placement:

- cell-based placement: divide the region into cells and pick a cell size that balances placement quality against the cost of computing it



Content replication:

- Permanent replicas, server-initiated replicas, client-initiated replicas (caches), clients



Remote-write protocol: the primary server for an item bears all the update load (writes are forwarded to it)

Local-write protocol: the primary migrates to the process that wants to perform an update, so that client's server bears the load



Quorum-based protocols: read/write quorums must be chosen correctly to avoid read-write and write-write conflicts; ROWA (read one, write all) is the special case R = 1, W = N
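
Gifford's quorum constraints made concrete (a sketch; N replicas, read quorum R, write quorum W):

    def quorum_ok(n, r, w):
        """R + W > N: read and write quorums overlap (no read-write
        conflict). 2W > N: any two write quorums overlap (no
        write-write conflict)."""
        return r + w > n and 2 * w > n

    print(quorum_ok(12, 3, 10))   # True: a correct choice
    print(quorum_ok(12, 7, 6))    # False: two disjoint write quorums fit
    print(quorum_ok(12, 1, 12))   # True: ROWA (read one, write all)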