OTM: Technical Considerations when Implementing FTI


Transcript of OTM: Technical Considerations when Implementing FTI

Page 1: OTM: Technical Considerations when Implementing FTI

Technical Considerations for FTI

ConAgra Foods

Page 2: OTM: Technical Considerations when Implementing FTI

Presentation

• Extension of a lunch conversation • Talk about decisions – not a walkthrough

Presenter
Presentation Notes
This presentation is formatted as an extended version of a conversation with a friend who, over lunch, asked me about my experiences with FTI. My friend will be installing FTI and wants to know what issues she should be concerned about. I have expanded on this conversation because, normally, I don’t do PowerPoint slides for my lunchtime conversations. This will not be a detailed list of all of the installation steps, nor will it cover every configuration decision we made. It will be focused on those decisions that had the biggest impact on our FTI installation. I have included some more of the details in the notes sections of the slides, with the warning that my jokes are also included in the notes.
Page 3: OTM: Technical Considerations when Implementing FTI

ConAgra Foods Overview

• ConAgra Foods started in 1919 as Nebraska Consolidated Mills

• In 1971, it was renamed ConAgra, Inc. and the company became ConAgra Foods in 2000

• ConAgra Foods is a Fortune 500 company with more than 23,000 employees; the company’s world headquarters are located in Omaha, Nebraska

• In fiscal year 2012, ConAgra Foods brought in $13+ billion in net sales

Page 4: OTM: Technical Considerations when Implementing FTI

Our Brands

Page 5: OTM: Technical Considerations when Implementing FTI

ConAgra Foods

OTM/FTI supports the Consumer portion of ConAgra Foods

• Annually we plan roughly 350,000 shipments / 5,500 materials

• Inbound Purchase Orders, Inbound Stock Transfer Orders, Outbound Stock Transfer Orders, Customer Sales

• Primarily includes the modes of Truckload, Intermodal, Rail, and Less than Truckload

• Contract Carriers

Presenter
Presentation Notes
First, I want to set the story. We, ConAgra Foods, installed OTM/FTI for the Consumer portion of our business. What that means is the reporting needs to support an annual volume of roughly 350,000 shipments. The breakdown is about 12% of shipments are inbound purchase orders of raw materials or packaging, 2% are inbound STOs (movement of semi-finished goods from one plant to another), 37% are outbound STOs which move the finished goods from our plants to our mixing centers, and 48% are the shipments to our customers. The remaining percentages are a mix of dunnage and returns. Also, these numbers do not include Customer Pickups.

Furthermore, all of this activity happens with about 5,500 materials over more than 70 brands and over a combination of about 7,000 origin / destination pairs. About 80% of the shipments are Truckload, 10% are Intermodal, 7% are Less than Truckload, 2% are Rail, and there is a smattering of other modes. For those keeping score of my math at home, these percentages are based on the number of shipments, not the weight moved on each mode. Also, as you see in the last bullet point, all of these are done with contract carriers. All 200 plus of them. Yeah.

So the point of these numbers is not to impress you that I personally know our business numbers inside and out. I certainly don’t – I looked them up! It is to say that there is a “healthy” amount of data in ConAgra Foods’ FTI.
Page 6: OTM: Technical Considerations when Implementing FTI

Legacy Reporting

Life in BF (Before FTI)
– One SAP Business Warehouse table

• No optimization
  – Extract and dump to Excel / Access
  – Access databases feeding other Access databases

Pro – End users willing to own their solution
Con – High amount of manual activities

Presenter
Presentation Notes
So where did we come from before FTI? That is an important part of the story. Any story of change and transformation starts with an assessment of the before. Before FTI, our primary source for Transportation reporting was one big stinking flat file housed in SAP Business Warehouse. It was created when the first reporting in Business Warehouse was brought up and, frankly, it stunk for performance. It had something like 4 indexes on it. That is it. No aggregation, no nothing. It was the kind of delivered solution that had people spit after they described what they had. As in “BW, huh, ptooie.”

As a result of living with this monstrosity, ways and means were created to make sure the business could work. Extracts were first built and then later scheduled. Microsoft Access was used extensively, with Access databases feeding other Access databases and those databases feeding Microsoft Excel reports. So then, in our Before FTI world, on the good side we had an environment of highly self-sufficient business users; the counter to that was that there were many processes that required weekly manual intervention, we were not nearly as good as we should have been about leveraging IT tools to make life more efficient for the organization, and we had multiple versions of the “truth.”
Page 7: OTM: Technical Considerations when Implementing FTI

To FTI or not to FTI

Needed more data for reporting than needed in OTM to ship
– Fiscal Calendar
– Material Hierarchy
– Budget

Option 1 – move detail data to SAP Business Warehouse
Option 2 – move master data to FTI
Option 3 – combine data at a different location

Presenter
Presentation Notes
Soon after I was brought onto the project (after it was well under way), we were faced with our first big decision. We were faced with it because, when I looked at the already gathered requirements, I knew we were going to need a lot of customization. The customization was being driven by three main items that were not in FTI.

The first one was our fiscal calendar. Our fiscal year starts after the last Sunday in May and has months of 4 weeks, 4 weeks, and 5 weeks to make up a quarter. We do almost no reporting based on the actual calendar; the whole management cycle is based off of the fiscal timetable. Additionally, we would need to bring Master data into FTI that was needed for reporting but not needed to process any shipments in OTM. The best, and biggest, example of this is our Material Master. Our Brand Hierarchy and the costs associated with brands are a key view into our data. Again, it isn’t needed for processing shipments; it is said internally that our trucks don’t care what is on them. However, organizationally, Brands are key, and so brand reporting is a major consideration. Lastly, we need to report lane activity (and by this, a lane is a point-to-point combination for non-customer shipments and point-to-zip3 for customer shipments) to compare to our annual plan, which is built up from this detailed level. The budget goes hand in hand with the fiscal calendar, with some of our lane standards (for example, Load Factor) being a constant within the month and others, like the weight of product shipped, having a weekly standard that is 1/4th or 1/5th (depending on the month) of the monthly standard. Whew. Ok. I’m not expecting that to be a complete understanding of our reporting scope, but hopefully that was enough to sense the complexity involved.

This is a problem not of reporting’s making. It is a function of adding a new and separate transactional system and still wanting reporting that is inclusive of data that isn’t included in the minimum required to have the subsystem execute. And there isn’t a good answer. If you buy off on moving all of the subsystem’s data into the main reporting tool (Option 1), you have an interface that is difficult to manage and a whole lot of building to do. If you move your master data into the subsystem (Option 2), you have a large task to re-structure a delivered system. Sure, as in Option 3, you can combine the data at the reporting level, but that only works if the data is already summarized and matches up neatly. Which, in this case, as is often the case, it doesn’t. There are benefits to moving the detail into the main reporting system: you can leverage existing talent from your main reporting system, you have one location for all of your details, and you have one system for all reporting. There are benefits to moving the master data to the sub reporting system: because you need sub-system expertise in the reporting system, you are better positioned to leverage outside resources or to outsource your entire support, you are moving the lesser of the data (master data vs. detail), and you have integration at the end user level with the transactional system.

While I could really try to milk the drama of this decision, I think the fact that I am here, now, giving a presentation on FTI speaks to the fact that FTI was chosen as the solution. Ultimately, it was the end user experience that decided it. The end users use OTM, and the integration of the reporting tool within the same environment was the deciding factor.
This isn’t to lightly gloss over how close the decision was or the complete analysis of the factors that went into making this decision. In fact, it is the number one question I am asked. Did we make the right decision? I’m not 100% sure, but what I can say is that the end user experience is the integrated experience that we chose. I would have changed certain assumptions, particularly around time, effort, and staffing.
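Since the fiscal calendar drove so much of the customization, here is a minimal sketch of the idea, written in Python purely for illustration. It assumes a 4-4-5 pattern with the fiscal year starting the day after the last Sunday in May and labels the fiscal year by its ending calendar year; the function names are hypothetical and this is not our actual FTI code, which lives in the database.

from datetime import date, timedelta

def fiscal_year_start(year):
    # Walk back from May 31 to the last Sunday in May; the fiscal year starts the next day (assumption)
    d = date(year, 5, 31)
    while d.weekday() != 6:
        d -= timedelta(days=1)
    return d + timedelta(days=1)

def fiscal_period(d):
    # Map a calendar date to (fiscal_year, period, week_in_period) using a 4-4-5 pattern
    start = fiscal_year_start(d.year)
    if d < start:
        start = fiscal_year_start(d.year - 1)
    week = min((d - start).days // 7 + 1, 52)    # fold any 53rd week into the last period (simplification)
    pattern = [4, 4, 5] * 4                      # twelve periods, three per quarter
    period = 1
    for weeks_in_period in pattern:
        if week <= weeks_in_period:
            return start.year + 1, period, week  # fiscal year labeled by its ending calendar year (assumption)
        week -= weeks_in_period
        period += 1

print(fiscal_period(date(2012, 9, 15)))          # e.g. (2013, 4, 3)

The same mapping is also what makes the weekly budget split work: a 4-week period spreads the monthly standard as 1/4th per week and a 5-week period as 1/5th.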
Page 8: OTM: Technical Considerations when Implementing FTI

Self Service / Ad Hoc
What is it?

Perception after seeing FTI demos:
– Once you plug the system in, you never need IT again
– You can join any two data points and get the data you want

Reality
– IT still has a large (initial) build time
– Complexity within the data still exists and while anyone can create a query, not everyone should
– More features mean more training is needed to utilize the tool

Presenter
Presentation Notes
I’m not necessarily here to comment on the presentations of what FTI can and can not do; I’m just here to point out our lessons learned. It wasn’t until several months in, when we were discussing “ad hoc” reporting, that we realized the disconnect. When I speak of “ad hoc” I am talking about the ability of end users to mix and match parts of a predefined data set, add filters to get a result, and save and share that result with others. When others heard the term “ad hoc” they thought it was the ability to pick any attribute or fact and combine it with any other attribute or fact. Simply put, there isn’t a system built today that has that capability, and frankly I’m not sure there ever will be. There is always the necessity to say how two sets of data are joined together and, at least with FTI, that task is an IT build task. IT’s build time, especially around getting our master data (by this I mean our calendar and our material hierarchy) joined to the existing data, is something we frankly could have done a better job of explaining, both how and why it is important. And even after that, there would still be restrictions on what could be linked with what other data. We simply can’t and don’t have every type of data joined with every other piece of data. When this situation was finally on the table and explained, the need to reconcile the perception of what is “ad hoc” created a sense that what we were delivering was less than what was expected.

The second part is that the data is, by its very nature, complex. The statistics I started this presentation with are the lead-in to talk about this, but there are other examples. The best one I can give is the difference between reporting on the ship date vs. the date that the transaction was posted. When we voucher to a carrier, it has one fiscal period that it posts in. This posting fiscal period may not be, and frequently is not, the same period that the shipment happened in. Now if you ask “how much did fuel cost last month,” the first thing you need to understand is “which month.” We went down the path of ensuring that both were reportable elements, and this added to the complexity. Now we are getting to the situation where, while people have the technical capability to create a query, we are left with the question of “should they.” It is not just the complexity of the data we were surfacing; the tool itself, while it has a very good user interface, is a powerful tool. With this power comes the need to train, and then we ask the question again: who is going to really be creating queries and how much training do they need? In the end, the “oh it is so simple, you just click here, drag here, and bamo – any question you can ask is answered” was not what we delivered. It can be difficult to imagine exactly what the new world will look like, but the sooner you can eliminate perceptions that differ from what will really be delivered, the better off you will be.
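To make the ship date vs. posting date point concrete, here is a tiny hypothetical example (Python, illustrative only; the periods, amounts, and field layout are made up and are not our actual FTI model) showing how the same vouchers answer “how much did fuel cost in a period” differently depending on which date you group by.

from collections import defaultdict

# Hypothetical voucher lines: (ship_period, posting_period, cost_type, amount)
vouchers = [
    ("2012-P03", "2012-P03", "FUEL", 410.00),
    ("2012-P03", "2012-P04", "FUEL", 385.00),   # shipped in P03, vouchered in P04
    ("2012-P04", "2012-P04", "FUEL", 402.50),
]

def fuel_cost_by(period_field):
    # Sum fuel cost grouped by either the ship period or the posting period
    idx = 0 if period_field == "ship" else 1
    totals = defaultdict(float)
    for line in vouchers:
        if line[2] == "FUEL":
            totals[line[idx]] += line[3]
    return dict(totals)

print(fuel_cost_by("ship"))      # {'2012-P03': 795.0, '2012-P04': 402.5}
print(fuel_cost_by("posting"))   # {'2012-P03': 410.0, '2012-P04': 787.5}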
Page 9: OTM: Technical Considerations when Implementing FTI

Quality is set at the time the data is struck

Downstream reports can fix “bad” data, but at a huge cost
– Address overrides
– Pool Shipments
– Clearing Process
– Out of tolerance data

Presenter
Presentation Notes
Among our challenges are situations where the process used to move the shipment is different from how it needs to be looked at by management, and from how it actually happened. And while that sounds odd, it is not an uncommon reporting issue. These are also the types of situations that are best told by examples.

The first example is that we allowed address overrides, where within our SAP order system the customer can send a different address than is in the system as their destination. The reason for this is that it is easier for the customer to just send a new address than it is for our system to stop the order, create the new master data, and add a new location. Now, I’m the reporting guy, and for me this is a problem: I now have one “location” with two different addresses. It is a case where, to make the business process easier, we have made the data more difficult to report on. As a result we have had to build new origins and destinations. We also, on the inbound side, have different storage locations tagged to a shipment. So we have both inbound and outbound situations where we need to look at the detail and not just pull the join of the address to the location.

Another example is pool shipments. Pool shipments and LTL shipments work well in the system, where each stage is another shipment number. And while this works great for processing, when you want to compare your LTL cost per pound to your TL cost per pound to go from, say, our mixing center to our customer, we need to adjust the costs of the first pool leg of the LTL shipment to be on the final leg.

The last example is our Clearing Process. Our clearing process works at the Material / Shipment level because this is the level at which we post transactions into the general ledger. However, we want to be able to report at a more detailed level for the material / shipment and the Cost Type (as in accessorial or fuel or line haul). As a result, we need to calculate the clearing amount down to a detailed level within FTI. Now, could the clearing process have been built at the lower level? Yes. Is the eventual answer that we need to do so? Yes. But the problem is that even if we change the process within the next quarter, we still have the history that we need to deal with, so once the data is created by the “bad” process the problem doesn’t really go away.

One last note is that in the “old world” of Access reporting, the end users had full capabilities to fix data if it was out of tolerance. An example of this is a shipment that comes across with, say, 800,000 lbs. Now, we really didn’t put a truck on the road with that, but the master data indicated that that was the weight. The reason for this was that on the shipping side there was an error in how the process was handled. These incorrect data elements point to gaps in the processing side. As I said, when the end users had the capability to just eliminate a suspicious piece of data, these errors were never really fixed. Now, in our tightly integrated world, there was the need to go back and look on the OTM side to fix the data so that it would flow into FTI properly.
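As an illustration of spreading the clearing amount down to the Cost Type level, here is a minimal sketch in Python with made-up numbers and cost types; it is not our actual FTI calculation, just the proportional-allocation idea.

# Hypothetical detailed costs for one material / shipment combination
detail_costs = {"LINE_HAUL": 1500.00, "FUEL": 300.00, "ACCESSORIAL": 200.00}

# Clearing amount posted to the general ledger at the material / shipment level
clearing_amount = 150.00

def allocate_clearing(detail_costs, clearing_amount):
    # Spread the clearing amount across cost types, proportional to each cost type's share
    total = sum(detail_costs.values())
    allocated = {ct: round(clearing_amount * amt / total, 2) for ct, amt in detail_costs.items()}
    # Push any rounding residue onto the largest cost type so the pieces still sum to the total
    residue = round(clearing_amount - sum(allocated.values()), 2)
    largest = max(detail_costs, key=detail_costs.get)
    allocated[largest] = round(allocated[largest] + residue, 2)
    return allocated

print(allocate_clearing(detail_costs, clearing_amount))
# {'LINE_HAUL': 112.5, 'FUEL': 22.5, 'ACCESSORIAL': 15.0}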
Page 10: OTM: Technical Considerations when Implementing FTI

What is the goal of FTI?

Executive Summary
– stable data
– pre-summarized

Detailed Analysis
– stable data
– lots of detail

Direct Management and Work Lists
– current data
– lots of detail and complexity

Presenter
Presentation Notes
One of the other fundamental questions that it helps to sit down and draw out clearly is what role the reporting tool will play. Just like with ad hoc reporting, no system can be all things to all people. Here are the basics of the roles that FTI can play.

With the Executive Summary, the need for stable data is important because a report pulled in the morning, where the manager looks at it and understands her gaps to plan, needs to have the same numbers when her boss looks at it later that morning. We are looking to avoid any potential organizational churn if the numbers are different. Also, pre-summarized data is important, as we want to utilize the overnight CPU cycles to pre-calculate the data at known summarized levels. Sure, when we move onto our magic box where everything runs in less than 7 seconds off of massive data tables, we won’t need to utilize off-hours aggregation. However, until that day, having pre-summarized data is part of the delivered reporting system.

With the work that an Analyst would do, the data also needs to be stable. Having personally pulled reports that were different from two different pulls, in my case from different days, I can tell you that any analyst with two different sets of base data wants to poke their eyes out with a coffee stirrer. Stable data is needed because we don’t want our Analysts to blind themselves. Unlike what executives want, well, unlike what executives should want, the Analyst needs lots and lots of data. While we always want performance to be good for the analysts, the “I’m not sure what I’m going to be looking for” naturally implies that, to some extent, there will be data pulls off of large datasets with no good optimization.

We chose to solve for the first two, which left us with the third need unsolved. With the need for direct management and work lists, you need to be looking at current data. While I like sending e-mails to my boss that say “I’ve already taken care of that,” it really isn’t a good work setup to have the manager looking to find issues that are no longer issues. A good example here would be the work queues of the Freight Pay specialists. The specialists and their manager need to be able to look at which invoices are the oldest by specialist. The specialist needs their work list, and the manager needs it summarized so that when work needs to be spread to someone else on a daily basis, the manager can shift resources as needed. This reporting is, however, difficult off of transactional systems, as the queries need a great deal of detail, often from unrelated tables, and the queries themselves tend to be quite complex. This is a solution that is neither good for our stable data system nor our transactional system, and we continue to look for good ways to solve this issue.
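To show what “pre-summarized” means in practice, here is a hypothetical Python sketch of the overnight roll-up idea; the field names and the summarization level are made up, and in our environment this work happens in the database rather than in Python.

from collections import defaultdict

# Hypothetical shipment detail rows: (fiscal_period, mode, platform, cost, weight_lbs)
shipment_detail = [
    ("2012-P03", "TL",  "Frozen",  1850.00, 42000),
    ("2012-P03", "TL",  "Frozen",  1725.00, 40500),
    ("2012-P03", "LTL", "Grocery",  610.00,  9800),
]

def build_summary(rows):
    # Pre-aggregate detail to (period, mode, platform) so dashboards never scan the detail table
    summary = defaultdict(lambda: {"cost": 0.0, "weight_lbs": 0, "shipments": 0})
    for period, mode, platform, cost, weight in rows:
        bucket = summary[(period, mode, platform)]
        bucket["cost"] += cost
        bucket["weight_lbs"] += weight
        bucket["shipments"] += 1
    return dict(summary)

# Run once per night; morning and afternoon report pulls then read the same frozen numbers
nightly_summary = build_summary(shipment_detail)
print(nightly_summary[("2012-P03", "TL", "Frozen")])
# {'cost': 3575.0, 'weight_lbs': 82500, 'shipments': 2}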
Page 11: OTM: Technical Considerations when Implementing FTI

And so…

Underestimated challenges
– Skills gap in in-house staff
  • OTM and OBIEE skill set in the same person
– Managing time to build

Presenter
Presentation Notes
With our direction set, we identified a skill set that we needed, but the skill wasn’t in house. The combination of OBIEE and OTM in the same person is not common. Oh, OBIEE. Yeah. OBIEE stands for Oracle Business Intelligence Enterprise Edition. You’ll need to forgive me. I tend to simplify things, sometimes too much, and I explain to many people that FTI is a delivered set of extractors from OTM, a base set of tables in OBIEE, and a few canned (if mostly useless for us) queries. I know that that is a large simplification, but eh, it is one I often use. Now, don’t sell this understanding short. Instead of visiting my good friend “Google” and trying to look for Fusion Technologies and getting plans to make a power plant based on superheated plasma, lithium, melted lead, and 200 pistons, I type in OBIEE or “Oracle Answers” as the first part and then my question. 9 times out of 10 I get the results I need. Also, I’m fond of comforting our general user community with the fact that the reporting tool they are using isn’t a homegrown app; it is part of an Enterprise toolset.

However, and again, OBIEE and OTM skills in the same person are not common. Calling it “not common” is underselling it. If this skill set were a creature, it would be on the endangered species list. So we did a large amount of on-the-job learning. “We” being not just ourselves but our consulting partner as well. This learning curve cost us in terms of time, which we did not necessarily have built into our project timetable. While there are other comments and recommendations, if after this you remember only that getting OBIEE and OTM skills in the same people is needed, you will be a huge step ahead.

Because of the master data I mentioned on the earlier slide, we also ran into issues with the frequency of refreshing. It wasn’t an apparent fact when we first started, but now, as we explore speeding up our updates from OTM to FTI (we are exploring going from daily down to hourly or less), we need to acknowledge that we have a frequency restriction based on our interfaces. Just as on a project you are only as fast as your critical path, we can only update FTI based on our least frequent interface.
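A tiny, hypothetical illustration of that last point (Python, with invented interface names and intervals): a subject area that joins several feeds can only be refreshed as often as its least frequent feed.

# Hypothetical refresh intervals, in hours, for the feeds behind one combined subject area
interface_refresh_hours = {
    "otm_shipments": 1,        # shipment data could be pushed hourly
    "material_hierarchy": 24,  # master data arrives once a day
    "budget": 24 * 7,          # the budget is loaded weekly
}

# Reports that join all three feeds are only as fresh as the slowest one
effective_refresh_hours = max(interface_refresh_hours.values())
print(effective_refresh_hours)  # 168 -- hourly OTM extracts alone don't make the joined data hourly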
Page 12: OTM: Technical Considerations when Implementing FTI

What is our FTI environment?

1 App Server: Memory 16 GB, 2 CPUs 3 GHz, Red Hat 5, 64-bit

1 DB Server: Memory 32 GB, 8 CPUs 3 GHz, Oracle 11202-4

OBIEE Version 10.1.3.4.1
OTM 6.1.6
FTI database is 550 GB

Presenter
Presentation Notes
I wanted to add at least one point about the specific hardware specifications. I won’t be talking too much about the specifics of the hardware because, frankly, that hasn’t been in our top ten pain points. Have we been known to throw hardware at problems? Sure, but for this project, for this set of issues, and with this base configuration, hardware hasn’t been a lever we have needed to pull. Yet. I don’t have it here, but the key hardware factors for the reporting are I/O speed, followed by I/O speed, and lastly I/O speed. For our other reporting environments we are going to virtualized hard drives (in memory, as it were) and, while we don’t have that next iteration planned, we know that it will come and that it represents our next big performance gain.

The specs you see here are for our production box. We also have a development (dev) environment, a QA environment, and an Alt environment as well. One of the technical considerations you need to make is how you plan on testing your reporting. Overall, as an organization, ConAgra Foods is moving towards testing environments that are richer in test data, but as we brought this system up we did not have enough transactions in the dev or QA environments to test. As a result, we update our Alt environment with a copy of Prod on a regular, about monthly, basis. Getting good test data, and testing the loading of that data, takes time. Also, you will have issues with RPD development and multiple developers. Since the tool doesn’t naturally have code change and migration tools built in, you will have problems. I don’t have a good solution; I only have the statement that you will have issues.
Page 13: OTM: Technical Considerations when Implementing FTI

Subject Areas / Presentation Folders

What is a Subject Area
Used for
• Security
• Remove ODBC errors
• Self Service
  – Operational
  – Financial

Presenter
Presentation Notes
Within FTI we use Subject Areas, which are the OBIEE Presentation Folders, to accomplish a few needed roles. One is security. We have set security by subject area, especially around Rates. This security isn’t a “hard” security like row-level security, but it is sufficient for the requirements. One of the early issues that we ran into is what got called “ODBC” errors. Within out-of-the-box FTI you can bring tables into a query even if there is no relationship between the two tables at a database level. As you would expect, this generates an error; specifically, the end users see the “ODBC” error. We started out with a few larger subject areas and have since then been making each Subject Area smaller and tighter, so that a user can know that, within a subject area, they can join any data element to any other data element and know that the back end joins have been made.

Our Subject Areas are roughly broken into two buckets – Operational and Financial. The operational ones are:
– On Time – we, of course, define On Time differently than the delivered FTI, and as a result we actually calculate whether a shipment is on time against several different characteristics when we load a table.
– Route Guide Compare – this is for compliance reporting, “Actuals vs. the Route Guide,” where we check how our carriers are accepting vs. the tenders we are giving them.
– Load Planner – a combination of On Time and Route Guide organized to pull by load planner.
– Rate Inquiry – an answer to the difficulty we have within OTM of comparing rates. By setting up a custom export and moving the data to FTI, we have solved the inability within OTM to look at and compare rates that may be “around” a lane.

The Finance Subject Areas are:
– Two Material and Shipment detail subject areas – one based on the date the dollars posted and one based on the day the shipment happened. The posting date is used by the financial team for the reconciliation process; the shipping date is used for material and brand analysis.
– An Actual vs. Budget subject area – this is where operational reporting with dollars is driven from. Mode utilization, Load Factor management, and Costs vs. Budget are all driven from the Actual vs. Budget subject area.
– An Accessorial Details subject area.
– A subject area for the Freight Pay team.

Originally all of these finance reports were in one subject area. And the key learning is here: use the subject areas to restrict users so that they can’t create queries that don’t work. The errors caused frustration and dis-empowered our business community in their desire to do self-service reporting.
Page 14: OTM: Technical Considerations when Implementing FTI

Performance

Use advanced features
Less use of Materialized Views (MVIEWS)
Examine ETL
Integrate Usage Tracking

Presenter
Presentation Notes
Ok, so “advanced features” – this is almost a Catch-22. You need to have a good Oracle understanding and a good understanding of how you are using FTI in order to be able to use the advanced features. I will tell you the ones that helped us, but by their very nature I can’t tell you how to use them. You need to develop the skill set, or buy the skill set, in order to be able to use them. The faster you can get to using these, the better off you are. The features we found most helpful are:
– Compressing tables
– The parallel query mechanism
– Histograms
– The PGA Aggregate Target parameter
This is a true “your mileage may vary” type of advice. The simple advice of “get better with Oracle” is almost a “duh,” but it is something worth calling out.

As we have gone on, we have been using Materialized Views less and less, and stored procedures to build tables more and more. The primary driver is the inability to incrementally update materialized views. Our nightly ETL had ballooned up to 10 hours and has now shrunk back down. One of the items that comes “delivered” is the OTM to FTI ETL. The thing we learned is that the delivered ETL isn’t as optimized as we needed it to be. Also, when you go live you need to bring up the Usage Tracking subject area. We found that one out fairly quickly. To the database team, the FTI users look like a single user, so without the reporting based on Usage Tracking, we were initially unable to tell which user and which query were causing slowdowns.
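For what “incrementally update” means here, this is a rough sketch of the idea only, written in Python with invented keys and columns; in our environment the equivalent work is done by stored procedures in the database, not by application code.

# Reporting table keyed by a hypothetical shipment_id; a full rebuild (the materialized
# view approach) would recreate all of it, while an incremental refresh merges only the delta.
reporting_table = {}

def incremental_refresh(changed_rows):
    # Upsert: insert new shipments, overwrite shipments that changed since the last load
    for row in changed_rows:
        reporting_table[row["shipment_id"]] = row
    return len(changed_rows)

# Delta pulled from the source since the last successful load (made-up rows)
delta = [
    {"shipment_id": "S100", "cost": 1850.00, "status": "DELIVERED"},
    {"shipment_id": "S101", "cost": 610.00, "status": "IN_TRANSIT"},
]
print(incremental_refresh(delta), "rows merged;", len(reporting_table), "rows total")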
Page 15: OTM: Technical Considerations when Implementing FTI

End user adoption and adaptation

Ad hoc vs. Dashboards
Links / prompts / drilldowns

Presenter
Presentation Notes
We started with just the queries in the Ad hoc area and thought that Dashboards would be something we would eventually work up to. However, their usefulness and the simplification they offer to the majority of users meant that dashboards were adopted by the users at a faster rate than we initially expected. Because we were going for a self-service environment, and dashboards inherently imply that we are delivering a canned report, we thought we wouldn’t be using them nearly as much as we do. The point here is that the end users went naturally towards what was easiest for them to use. We didn’t, as we shouldn’t have, put up barriers, but we also did not get ahead of the adoption curve, and so we didn’t set things up so that we would have easily managed group dashboards. In hindsight, we probably should have set that up for self service of group dashboards.

One of the other features that we commonly use, and probably should have been better initially at communicating, is how to set up links between reports. Most of the reporting that is done starts out at a high level (overall Consumer) and then drills down to look for specific issues. By using a link on a report and a prompt on the receiving report, we build the natural drill down. It isn’t a trick so much as understanding that “standard” drill downs are not really adding a column and drilling down; they are opening up a very similar looking report. Since most drill downs go only about 4 layers down, there isn’t a great deal of development overhead in setting these up. We now have the end users doing this setup, and it has been a great enabler for getting to what they want to see.
Page 16: OTM: Technical Considerations when Implementing FTI

Training
– FTI vs. Subject Area
– Subject Area Subject Matter Expert
  • Owned approval of changes to Subject Area
  • Owned documentation to other end users

Presenter
Presentation Notes
One of the other large organizational hurdles we had to cross was Training. We needed to break out the basic “how do I move around in FTI” training from the Subject Area specific training. This is to say that we needed to divorce the understanding that was universal (i.e., how to save a query) from what was specific to the Subject Area (what exactly is in the field “Order Type”?). The FTI (as in how to work it) training is the responsibility of the IT team to document and deliver, and the Subject Area training is the responsibility of the Super User for that Subject Area. That Super User owns both the approval of all changes that can happen to that Subject Area and the documentation of that Subject Area. While we are familiar with leveraging a Super User network, driving this split in the expectation and understanding of who is responsible for what was a stumbling block. It appeared as end users completing the IT portion of the training and still delivering the feedback “I’m not yet trained.” The solution to this was to make sure that we clearly called out the split between giving basic functional knowledge and specific expertise for a subject area.
Page 17: OTM: Technical Considerations when Implementing FTI

Screen Shot

Presenter
Presentation Notes
This shows how the basic drill down goes. Here is the Consumer level on time report, broken up by one of those custom tables. We look at our brands rolled into platforms. Yes, this leads to some double counting, but that is for a whole ’nother presentation. Here we have the basic “red/green” look, with the highlighted platforms being those below the standards (which are listed on the view as well). As you can see, this is the primary breakdown by platform; the drop down also allows for the other main high-level looks, like by SCAC or by Destination.
Page 18: OTM: Technical Considerations when Implementing FTI

Screen Shot

Presenter
Presentation Notes
Next is the drill down by destination. Here we see the drill down of the “Frozen” platform metrics to the first level, which is, in this case, by “origin.” The reason for this is that this flow relates to how we have our load planners organizationally grouped. We then see that our first big issue would be at “Batesville.”
Page 19: OTM: Technical Considerations when Implementing FTI
Presenter
Presentation Notes
Last, we would then see the prior week’s detail. We have here three different reports (platform, origin, detail); two have similar looks, but each time we go down to a lower level we pass the parameters and add more detail. Once we are down at this lowest level, we see that the first two issues have common destinations and a common carrier. The next step would be to look at the carrier version (from the top level) and check whether this is a carrier issue, whether there was an issue with the destination, or whether there are further issues down below that would tell a different story. By using the basic “red / green” and three reports that are linked, we create an easy way to get from the key metrics to the transactions that are causing the issues.
Page 20: OTM: Technical Considerations when Implementing FTI

Thanks!

Presenter
Presentation Notes
So I hope you found this helpful. When I reviewed this internally, we spent a few hours going “oh yeah” and “yup, I remember that,” and when I presented this to someone not directly involved with the project but with years of reporting experience, he nodded his head many times and said “sounds familiar.” So I hope I struck a good balance between the specifics of what we did in FTI and the broader “lessons learned” that can be applied elsewhere. Thanks so much for your time.