Resource Lib Metrics

1. User Performance

1.1. URL Coverage

1.1.1. % of manually sent answers in Doc

1.1.1.1. Parameters

1.1.1.1.1. Count of manual ans in docs (All resources)

1.1.1.1.2. Total manual ans sent

1.1.1.2. Equation

1.1.1.2.1. Count of manual ans in docs (All resources) / Total manual ans sent %

1.1.1.3. Tracking

1.1.1.3.1. Count of manual ans in docs (All resources) (Not snippet)

1.1.1.3.2. Total manual ans sent
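
The ratio above can be sketched as a small helper, assuming the tracking pipeline exposes the manually sent answers and the full doc texts as lists of strings (hypothetical log shapes; verbatim substring match stands in for whatever matching rule is actually used):

```python
def pct_manual_answers_in_docs(manual_answers, doc_texts):
    """Share of manually sent answers found verbatim in any resource-lib
    doc (snippet answers assumed filtered out upstream)."""
    if not manual_answers:
        return 0.0
    in_doc = sum(
        any(ans in doc for doc in doc_texts) for ans in manual_answers
    )
    return 100.0 * in_doc / len(manual_answers)
```

A fuzzier match (normalised text, token overlap) may be needed in practice, since agents rarely type a doc sentence character-for-character.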

1.2. Intent Lib Addition

1.2.1. % of frequently answered snippets added to intent lib

1.2.1.1. Parameters

1.2.1.1.1. Resource Lib pred ans tally

1.2.1.1.2. Ans added to intent lib from resource lib when suggested

1.2.1.2. Equation

1.2.1.2.1. Ans added to intent lib from resource lib when suggested / Resource Lib pred ans tally %

1.2.1.3. Tracking

1.2.1.3.1. Resource Lib pred ans tally

1.2.1.3.2. Ans added to intent lib from resource lib when suggested

2. UI

2.1. Performance

2.1.1. Resource lib is expected to have a long inf time

2.1.1.1. Does speed have an effect?

2.1.2. Correlation between inf speed vs ans selection

2.1.2.1. Do clients tend to pick the intent lib ans?

2.1.2.2. Parameters

2.1.2.2.1. Agent online time

2.1.2.2.2. Intent lib pred time

2.1.2.2.3. Resource lib pred time

2.1.2.2.4. Agent response

2.1.2.3. Equation

2.1.2.3.1. This is a trend for us to analyse

2.1.2.4. Tracking

2.1.2.4.1. Agent online time

2.1.2.4.2. Intent lib pred time

2.1.2.4.3. Resource lib pred time

2.1.2.4.4. Agent response

2.2. Intuitiveness

2.2.1. Do agents use the UI as intended?

2.2.2. % of Typed ans overlapping resource snippets shown

2.2.2.1. Parameters

2.2.2.1.1. Total typed ans

2.2.2.1.2. Total typed ans overlapping with highlighted ans

2.2.2.2. Equation

2.2.2.2.1. Total typed ans overlapping with highlighted ans / total typed ans %

2.2.2.3. Tracking

2.2.2.3.1. Total typed ans

2.2.2.3.2. Total typed ans overlapping with highlighted ans
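
"Overlapping" needs a concrete test; one sketch is a token-level Jaccard similarity between the typed answer and the highlighted snippet ans shown at the time (the 0.6 threshold is a guess to be tuned against labelled examples, and the pair shape is an assumed log format):

```python
def overlaps(typed_ans, highlighted_ans, threshold=0.6):
    """Token-level Jaccard overlap between a typed answer and the
    highlighted snippet ans shown alongside it."""
    a = set(typed_ans.lower().split())
    b = set(highlighted_ans.lower().split())
    if not a or not b:
        return False
    return len(a & b) / len(a | b) >= threshold

def pct_typed_overlapping(pairs):
    """pairs: (typed_ans, highlighted_ans) for every typed answer sent."""
    if not pairs:
        return 0.0
    hits = sum(overlaps(t, h) for t, h in pairs)
    return 100.0 * hits / len(pairs)
```

A high value would mean agents are retyping answers the UI already surfaced, i.e. the highlight is correct but not being clicked.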

3. Model

3.1. Encoders

3.1.1. Answer sent is part of snippet returned by encoder

3.1.1.1. Parameters

3.1.1.1.1. Total answers sent

3.1.1.1.2. Answers part of ES snippets

3.1.1.2. Equation

3.1.1.2.1. Ans part of ES Snippets / Total ans sent %

3.1.1.3. This metric will cover how many answers are missed by the encoder's snippets

3.1.1.4. Tracking

3.1.1.4.1. Total answers sent

3.1.1.4.2. Answers part of ES snippets
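
Assuming each sent answer can be joined to the ES snippet set the encoder returned for that message, the metric is an encoder recall (substring containment is a placeholder for the real matching rule):

```python
def encoder_recall(answers_sent, es_snippets_per_answer):
    """% of sent answers contained in at least one snippet returned by
    the encoder/ES stage for that message. The complement is the share
    of answers the encoder missed entirely."""
    if not answers_sent:
        return 0.0
    hits = sum(
        any(ans in snip for snip in snippets)
        for ans, snippets in zip(answers_sent, es_snippets_per_answer)
    )
    return 100.0 * hits / len(answers_sent)
```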

3.2. Reader

3.2.1. % highlight used in selected snippet

3.2.1.1. Parameters

3.2.1.1.1. Total snippet ans sent

3.2.1.1.2. Total highlighted snippet ans sent

3.2.1.2. Equation

3.2.1.2.1. Total highlighted snippet ans sent / Total snippet ans sent %

3.2.1.3. This metric will cover how well the reader highlights a sentence in the snippet

3.2.1.4. Tracking

3.2.1.4.1. Total snippet ans sent

3.2.1.4.2. Intent Lib

3.2.1.4.3. Total highlighted snippet ans sent

3.2.2. % Answer in top K predictions from reader

3.2.2.1. Parameters

3.2.2.1.1. Total snippets in ans from ES

3.2.2.1.2. Total snippets in top K reader predictions

3.2.2.2. Equation

3.2.2.2.1. Total snippets in top K reader predictions / Total snippets in ans from ES %

3.2.2.3. This metric will cover how well the reader ranks snippets

3.2.2.4. Tracking

3.2.2.4.1. Total snippets in ans from ES

3.2.2.4.2. Total snippets in top K reader predictions
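
The ranking metric can be sketched as a top-K recall, assuming the logs record which ES snippet the agent's answer came from and the reader's ordered candidate list for that message (both names and K=3 are assumptions):

```python
def reader_top_k_recall(selected_snippets, reader_rankings, k=3):
    """% of agent-selected snippets (from the ES candidates) that the
    reader ranked in its top K. selected_snippets[i] is the snippet the
    answer came from; reader_rankings[i] is the reader's ordered list."""
    if not selected_snippets:
        return 0.0
    hits = sum(
        sel in ranking[:k]
        for sel, ranking in zip(selected_snippets, reader_rankings)
    )
    return 100.0 * hits / len(selected_snippets)
```

Sweeping k from 1 upward shows how far down the ranking agents have to scroll before the right snippet appears.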

4. Scraping

4.1. Snippet coverage

4.1.1. % of answers sent which are part of the doc but not in the snippet

4.1.1.1. Parameters

4.1.1.1.1. Total Ans Sent

4.1.1.1.2. Ans in Doc

4.1.1.2. Equation

4.1.1.2.1. Ans in Doc / Total Ans Sent %

4.1.1.3. Tracking

4.1.1.3.1. Total Manual Answers

4.1.1.3.2. Ans in Doc
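
This gap between what scraping captured and what snippeting kept can be sketched as follows, assuming answers, scraped doc texts, and extracted snippets are all available as flat lists of strings (hypothetical log shapes; substring match stands in for the real rule):

```python
def snippet_coverage_gap(answers, docs, snippets):
    """% of sent answers found in a scraped doc but NOT in any extracted
    snippet -- content the scraper captured but snippeting dropped."""
    if not answers:
        return 0.0
    missed = sum(
        any(a in d for d in docs) and not any(a in s for s in snippets)
        for a in answers
    )
    return 100.0 * missed / len(answers)
```

A high value points at the snippet extraction step, not the scraper or the model, as the place losing coverage.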