Agile Methodology In System Design

System design is the process of defining the system architecture, components, modules and interfaces needed to satisfy the specified system requirements. It can be viewed as the conversion of a theoretical system into actual product development, and it covers the description of the system's architecture, structure and behavior, along with their analysis. It includes the logical design of the proposed system, which is an abstract representation of the data flows, inputs and outputs of the system architecture, as well as the physical design, which represents the actual inputs and outputs of the system.

5.1 Methodology Used for System Development
The Software Development Life Cycle (SDLC) is the process of transforming the feasible …
For implementing this project, agile methodology has been used at Happiest Minds Technologies.
The agile development methodology follows an iterative, incremental approach that provides the flexibility to accommodate changing project requirements as they arise. The key features of the agile methodology are:
1. Agile divides the project development life cycle into sprints, where each sprint may span a couple of days or months, but never years.
2. The methodology can be viewed as a collection of many small projects, which are iterations of the same development phases, each focused on improving overall software quality through opinions and feedback from users or the QA team.
3. Agile allows changes to be made to the project development requirements even after the initial planning has been completed.
4. Because an iterative development approach is used, planning, development, prototyping and other software development phases can appear more than once during the entire development life cycle.
When the object classification application is launched from the master node, it creates a SparkContext object. This SparkContext is responsible for communicating with the other worker nodes. It creates RDDs over the datasets and distributes them to the worker nodes. In the spam dataset, each file is treated as an RDD and processed by a worker node: each worker receives the RDD, uses the RDD's information to fetch the corresponding file from HDFS, and processes it, i.e. performs Pre-Processing, Feature …
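The driver-side flow described above can be sketched roughly as follows, assuming PySpark. The dataset path, the application name and the pre-processing step are illustrative placeholders, not the actual Happiest Minds implementation; `wholeTextFiles` is used here because it yields one (path, contents) record per file, matching the one-file-per-worker processing the text describes.

```python
def preprocess(record):
    """Placeholder pre-processing for one (file_path, contents) record:
    lowercase the text and split it into tokens. The real pipeline would
    perform its own Pre-Processing and Feature steps here."""
    path, text = record
    return (path, text.lower().split())

def run_classification_job(dataset_path="hdfs:///spam_dataset/*"):
    # Import deferred so the pure preprocess step above can be used
    # (and unit-tested) without a Spark installation.
    from pyspark import SparkConf, SparkContext

    # The SparkContext is created on the master (driver) node; it is the
    # handle through which the driver communicates with the worker nodes.
    sc = SparkContext(conf=SparkConf().setAppName("ObjectClassification"))

    # One (path, contents) pair per file: each spam file becomes a record
    # that a worker fetches from HDFS and processes independently.
    files = sc.wholeTextFiles(dataset_path)
    tokens = files.map(preprocess)

    count = tokens.count()  # an action: triggers the distributed execution
    sc.stop()
    return count
```

Transformations such as `map` are lazy; no work is shipped to the workers until an action like `count` is invoked, which is why the per-file processing only happens when the job is actually run.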
