Every Fundraising Team Should Do This!
In the past few months, I have had conversations with fundraisers from three different organizations I support. After each in-person conversation, I asked, “Do you have an app on your phone that lets you record a voice memo about this conversation and then automatically send it to someone on the fundraising team to put in your CRM?”
Not surprisingly, I got three “no” responses. And all three fundraisers then said some version of, “That would be great!”
One of the most powerful ways to use AI today is to innovate through software. Each small innovation can save time, increase revenue, or improve service; some do more than one. In the past, you needed thousands of dollars’ worth of a professional coder’s time or someone on staff with a computer science background. Now you just need AI.
This morning, I created a solution for the “donor voice memo” problem in 3 minutes using Claude Code.
Here was the prompt I entered:
<role> You are a computer programmer with specific experience writing for the Microsoft 365 suite and Microsoft Copilot. </role>
<context> I have nonprofit fundraisers who need to document donor visits securely each time they visit a donor by creating a voice memo on their phone. These voice memos should be able to be summarized by AI and sent automatically and securely to a fundraising team assistant who will find the donor in Microsoft Dynamics, enter the summary as a CRM note, and create a list of action items needed for donor follow-up. </context>
<action> Write computer code that will enable the fundraiser to push a button on their phone, record a voice memo securely, and securely send the memo to a specific team member. </action>
<instructions> Require the fundraiser to login to Microsoft 365 the first time they use the app, and keep them logged in in the future. Create a single button when the app is launched that starts recording a voice memo and then stops recording when the same button is pushed again. When a recording is stopped, prompt the user to either send the memo or discard it. If the user chooses to send, use Microsoft Copilot AI to organize and summarize the memo and then send it to the team member already chosen to send the memo to. The summary should include a section with all relevant information from the voice memo and a section with action items. Include a menu burger that includes the following options: 1. a setup screen that allows the fundraiser to choose a team member in Microsoft Teams or Microsoft Sharepoint to whom to send the memo. 2. the ability to re-login to Microsoft 365 if they are logged out. </instructions>
<format> Output the code and step-by-step instructions on how to implement it behind a Microsoft Office 365 firewall. </format>
And below is the response, complete with two possible solutions, all of the code, explicit implementation instructions, costs, security best practices, and maintenance guidance! I haven’t tested it yet, but I believe it would take about a day to get working, plus a little time to test it with a fundraiser or two.
You could do the same thing with a different CRM and/or Google Enterprise workspace. My point is that it no longer takes an expert to innovate — just a few people who are willing to experiment and the blessing of leadership to do it.
See below for the full, amazing Claude output. Mind you, I have the paid version, but it’s just $20/month. If you’re a nonprofit on the Claude Teams version, you get a 75% discount that comes to about $6.25/month per employee.
Donor Visit Voice Memo App - Implementation Guide
I'll provide you with two implementation approaches: a Power Apps solution (recommended for nonprofits) and a React Native solution (for a custom native app). The Power Apps approach is more practical for most nonprofits as it requires no app store deployment and integrates natively with M365.
Solution 1: Power Apps + Power Automate (Recommended)
Architecture Overview
Power Apps: Mobile interface with recording capability
Power Automate: Workflow automation and AI processing
Azure Cognitive Services: Speech-to-text transcription
Microsoft Graph API: Teams/SharePoint integration
Microsoft 365: Authentication and security
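To make the hand-offs between these pieces concrete, here is a small TypeScript sketch of the data each stage passes along (the type names are illustrative only, not part of any Microsoft API):

// Illustrative types only, not part of any Microsoft API. They describe
// what each stage of the pipeline produces and consumes.
interface MemoRecording {
  audioBase64: string;      // the recorded audio, base64-encoded
  fileName: string;         // e.g. "20240501-1430_JaneDoe.wav"
  fundraiserEmail: string;  // who recorded the memo
  recipientEmail: string;   // the team member who files it in the CRM
}

interface TranscriptionResult {
  text: string;             // the full transcript from Azure Speech
}

interface MemoSummary {
  summary: string;          // AI-generated narrative summary
  actionItems: string[];    // follow-up tasks extracted by the model
}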
Step 1: Prerequisites Setup
Required Licenses:
Microsoft 365 Business Standard or higher
Power Apps per user license (or included in M365)
Azure subscription (for Cognitive Services - has free tier)
Required Permissions:
Power Apps environment access
Power Automate premium connector access
Azure AD app registration permissions
Step 2: Azure Cognitive Services Setup
Login to Azure Portal (portal.azure.com)
Create Speech Service Resource:
- Click "Create a resource"- Search for "Speech"- Select "Speech" service- Create new resource group: "DonorMemoApp-RG"- Region: Choose closest to your users- Pricing: Free F0 tier (5 hours/month) or S0- Click "Review + Create"Get API Key:
Go to resource → Keys and Endpoint
Copy Key 1 and Endpoint URL
Save these securely (you'll need them)
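Before wiring the key into a flow, it is worth a quick sanity check. A minimal TypeScript script (Node 18+, which has a global fetch) can send a short WAV clip to the same endpoint the flow will use; the region, key, and file path below are placeholders:

// Sanity-check the Speech key from a terminal (Node 18+).
// REGION, KEY, and sample.wav are placeholders to substitute.
import { readFileSync } from 'node:fs';

const REGION = 'YOUR_REGION';   // e.g. 'eastus'
const KEY = 'YOUR_SPEECH_KEY';  // Key 1 from the portal

async function testSpeechKey(): Promise<void> {
  const audio = readFileSync('sample.wav'); // a short 16 kHz mono WAV clip
  const res = await fetch(
    `https://${REGION}.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US`,
    {
      method: 'POST',
      headers: {
        'Ocp-Apim-Subscription-Key': KEY,
        'Content-Type': 'audio/wav',
      },
      body: audio,
    }
  );
  // A working key returns 200 with a JSON body containing DisplayText.
  console.log(res.status, await res.text());
}

testSpeechKey();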
Step 3: Register Azure AD Application
Go to Azure AD (portal.azure.com → Azure Active Directory)
App Registration:
- Click "App registrations" → "New registration"- Name: "Donor Memo App"- Supported account types: "Accounts in this organizational directory only"- Redirect URI: Leave blank for now- Click "Register"Configure API Permissions:
- Click "API permissions" → "Add a permission"- Microsoft Graph → Delegated permissions- Add these permissions: * User.Read * Files.ReadWrite * Chat.ReadWrite * TeamMember.Read.All- Click "Grant admin consent" (requires admin)Save Application Details:
Copy Application (client) ID
Copy Directory (tenant) ID
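If you want to confirm the registration works before building the app, a short device-code sign-in from a terminal will do it. This sketch assumes the @azure/msal-node package and that "Allow public client flows" is enabled on the registration; the IDs are placeholders:

// Verify the app registration via device-code sign-in.
// Requires: npm install @azure/msal-node
import { PublicClientApplication } from '@azure/msal-node';

const pca = new PublicClientApplication({
  auth: {
    clientId: 'YOUR_CLIENT_ID',
    authority: 'https://login.microsoftonline.com/YOUR_TENANT_ID',
  },
});

async function verifyRegistration(): Promise<void> {
  const result = await pca.acquireTokenByDeviceCode({
    scopes: ['User.Read'],
    // Prints a login URL and one-time code to enter in a browser
    deviceCodeCallback: (info) => console.log(info.message),
  });
  console.log('Token acquired for:', result?.account?.username);
}

verifyRegistration();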
Step 4: Create Power Automate Flow
For reference, here is the flow's structure, sketched as JSON (the corrected, detailed configuration follows in Step 5):
{
"name": "Process Donor Voice Memo",
"description": "Transcribes and summarizes donor visit memos",
"trigger": {
"type": "manual",
"inputs": {
"audioFileContent": "string",
"audioFileName": "string",
"fundraiserEmail": "string",
"recipientEmail": "string"
}
},
"actions": [
{
"name": "TranscribeAudio",
"type": "Http",
"inputs": {
"method": "POST",
"uri": "https://YOUR_REGION.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions",
"headers": {
"Ocp-Apim-Subscription-Key": "@{variables('SpeechAPIKey')}",
"Content-Type": "application/json"
},
"body": {
"contentUrls": ["@{body('UploadToSharePoint')?['Path']}"],
"locale": "en-US",
"displayName": "Donor Visit Transcription"
}
}
},
{
"name": "SummarizeWithCopilot",
"type": "Http",
"inputs": {
"method": "POST",
"uri": "https://graph.microsoft.com/v1.0/me/chats/getAllMessages",
"authentication": {
"type": "ManagedServiceIdentity"
}
}
},
{
"name": "SendToRecipient",
"type": "Office365.SendEmail",
"inputs": {
"to": "@{triggerBody()?['recipientEmail']}",
"subject": "Donor Visit Memo - @{utcNow()}",
"body": "<h2>Donor Visit Summary</h2><p>@{body('SummarizeWithCopilot')?['summary']}</p><h3>Action Items:</h3><ul>@{body('SummarizeWithCopilot')?['actionItems']}</ul>",
"importance": "Normal"
}
}
]
}
To create this flow:
Go to Power Automate (make.powerautomate.com)
Click Create → Instant cloud flow (the JSON above is a reference sketch of the flow's structure, not an importable package)
Name it "Process Donor Voice Memo"
Select trigger: Power Apps
Then build the flow manually, step by step:
Step 5: Create the Power Automate Flow (Detailed Steps)
Create a new flow in Power Automate with these steps:
A. Initialize Variables:
Action: Initialize variable
Name: SpeechAPIKey
Type: String
Value: [Your Azure Speech API Key]
B. Trigger Configuration:
Trigger: Power Apps (V2)
Add these inputs:
- Ask in PowerApps: AudioFile (File content)
- Ask in PowerApps: RecipientEmail (Text)
- Ask in PowerApps: FundraiserName (Text)
C. Save Audio to SharePoint:
Action: Create file
Site Address: [Your SharePoint site]
Folder Path: /Shared Documents/DonorMemos
File Name: @{formatDateTime(utcNow(), 'yyyyMMdd-HHmmss')}_@{triggerBody()?['FundraiserName']}.wav (utcNow() alone includes colons, which SharePoint file names do not allow)
File Content: @{triggerBody()?['AudioFile']}
D. Transcribe Audio with Azure Speech:
Action: HTTP
Method: POST
URI: https://[YOUR-REGION].stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US
Headers:
Ocp-Apim-Subscription-Key: @{variables('SpeechAPIKey')}
Content-Type: audio/wav
Body: @{triggerBody()?['AudioFile']} (send the raw audio from the trigger; the Create file action's output does not include file content)
E. Summarize with AI (using Azure OpenAI or GPT):
Action: HTTP
Method: POST
URI: https://api.openai.com/v1/chat/completions
Headers:
Authorization: Bearer [YOUR-OPENAI-KEY]
Content-Type: application/json
Body:
{
"model": "gpt-4",
"messages": [
{
"role": "system",
"content": "You are a fundraising assistant. Summarize donor visit memos and extract action items."
},
{
"role": "user",
"content": "Please summarize this donor visit memo and create a list of action items:\n\n@{body('HTTP')?['DisplayText']}"
}
],
"temperature": 0.3
}
F. Parse AI Response:
Action: Parse JSON
Content: @{body('HTTP_2')}
Schema:
{
"type": "object",
"properties": {
"choices": {
"type": "array",
"items": {
"type": "object",
"properties": {
"message": {
"type": "object",
"properties": {
"content": {"type": "string"}
}
}
}
}
}
}
}
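For reference, step F's parsing amounts to the following TypeScript, which can be handy for testing the logic outside Power Automate (the raw string stands in for the HTTP action's response body):

// Extract the model's text from a chat completion response (step F in code form).
interface ChatCompletion {
  choices: { message: { content: string } }[];
}

function extractSummary(raw: string): string {
  const parsed: ChatCompletion = JSON.parse(raw);
  if (!parsed.choices?.length) {
    throw new Error('No choices in completion response');
  }
  // The summary and action items come back as one block of text
  return parsed.choices[0].message.content;
}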
G. Send via Teams or Email:
Action: Send an email (V2)
To: @{triggerBody()?['RecipientEmail']}
Subject: Donor Visit Memo - @{utcNow('yyyy-MM-dd')}
Body:
<html>
<body>
<h2>Donor Visit Summary</h2>
<p><strong>Fundraiser:</strong> @{triggerBody()?['FundraiserName']}</p>
<p><strong>Date:</strong> @{utcNow()}</p>
<hr>
<h3>Full Transcription:</h3>
<p>@{body('HTTP')?['DisplayText']}</p>
<hr>
<h3>AI Summary:</h3>
<p>@{body('Parse_JSON')?['choices'][0]['message']['content']}</p>
<hr>
<p><em>Audio file saved to SharePoint: @{body('Create_file')?['Path']}</em></p>
</body>
</html>
Importance: High
Step 6: Create the Power Apps Interface
Create a new Canvas App in Power Apps:
File: DonorMemoApp.msapp
A. Create the main screen:
Open Power Apps Studio (make.powerapps.com)
Create → Canvas app from blank → Phone layout
Name: "Donor Visit Memo Recorder"
B. Add controls to Screen1:
// Add these controls via Insert menu:
// 1. RECORD BUTTON (large, centered)
Button_Record:
Text: If(varIsRecording, "⏹ Stop Recording", "🎤 Start Recording")
OnSelect: |
If(varIsRecording,
// Stop recording
Set(varIsRecording, false);
Set(varAudioFile, Audio_Recorder.Audio);
Set(varShowPrompt, true),
// Start recording
Set(varIsRecording, true);
Set(varShowPrompt, false)
)
Fill: If(varIsRecording, RGBA(220, 38, 38, 1), RGBA(37, 99, 235, 1))
Color: White
Height: 100
Width: Parent.Width - 40
X: 20
Y: (Parent.Height / 2) - 50
FontSize: 20
// 2. AUDIO RECORDING CONTROL (hidden but active)
// (sketch: Power Fx has no Microphone.Start/Stop functions; the real
// Microphone control starts and stops when the user taps it, so this
// piece would need rework in an actual build)
Audio_Recorder:
Mic: If(varIsRecording, Microphone.Start, Microphone.Stop)
OnStop: Set(varAudioFile, Self.Audio)
Visible: false
// 3. STATUS LABEL
Label_Status:
Text: If(varIsRecording,
"🔴 Recording... (" & Text(Timer_Recording.Value, "[$-en-US]mm:ss") & ")",
If(varShowPrompt, "Recording saved", "Ready to record"))
X: 20
Y: Button_Record.Y - 60
Width: Parent.Width - 40
Align: Center
FontSize: 16
Color: If(varIsRecording, RGBA(220, 38, 38, 1), RGBA(55, 65, 81, 1))
// 4. TIMER (for recording duration)
Timer_Recording:
Duration: 3600000 // 1 hour max
Start: varIsRecording
AutoStart: false
Repeat: false
Visible: false
// 5. PROMPT CONTAINER (shown after recording)
Container_Prompt:
Visible: varShowPrompt
X: 20
Y: Parent.Height / 2 - 100
Width: Parent.Width - 40
Height: 200
// Inside container:
Label_Prompt:
Text: "What would you like to do with this recording?"
FontSize: 18
Align: Center
Button_Send:
Text: "✉️ Send Memo"
OnSelect: |
Set(varSending, true);
ProcessDonorMemo.Run(
varAudioFile,
varRecipientEmail,
User().FullName
);
Set(varSending, false);
Set(varShowPrompt, false);
Notify("Memo sent successfully!", NotificationType.Success)
Fill: RGBA(34, 197, 94, 1)
Color: White
Button_Discard:
Text: "🗑️ Discard"
OnSelect: |
Set(varAudioFile, Blank());
Set(varShowPrompt, false);
Notify("Recording discarded", NotificationType.Information)
Fill: RGBA(156, 163, 175, 1)
Color: White
// 6. HAMBURGER MENU ICON
Icon_Menu:
Icon: Icon.HamburgerMenu
OnSelect: Set(varShowMenu, !varShowMenu)
X: 20
Y: 20
Width: 40
Height: 40
Color: RGBA(55, 65, 81, 1)
// 7. MENU CONTAINER (slide-in from left)
Container_Menu:
Visible: varShowMenu
X: If(varShowMenu, 0, -300)
Y: 0
Width: 300
Height: Parent.Height
Fill: White
// Inside menu:
Label_MenuTitle:
Text: "Settings"
FontSize: 24
FontWeight: Bold
Y: 20
Button_ChooseRecipient:
Text: "👤 Choose Recipient"
OnSelect: Navigate(Screen_ChooseRecipient, ScreenTransition.Cover)
Y: 80
Button_ReLogin:
Text: "🔐 Re-login to Microsoft 365"
OnSelect: |
Set(varAuthToken, Blank());
// (sketch: Power Fx has no Connection.Connect function; see the note in OnStart)
Connection.Connect("Office 365")
Y: 160
Label_CurrentRecipient:
Text: "Current recipient: " & varRecipientEmail
Y: 240
FontSize: 12
Color: Gray
C. Add Screen2 (Choose Recipient):
Screen_ChooseRecipient:
// Back button
Icon_Back:
Icon: Icon.ChevronLeft
OnSelect: Back()
X: 20
Y: 20
Label_Title:
Text: "Select Team Member"
FontSize: 24
Y: 20
// Gallery of team members
Gallery_TeamMembers:
Items: Office365Users.SearchUser()
TemplateSize: 80
Y: 80
Height: Parent.Height - 80
// Inside gallery template:
Image_Profile:
Image: Office365Users.UserPhotoV2(ThisItem.Mail)
Label_Name:
Text: ThisItem.DisplayName
Label_Email:
Text: ThisItem.Mail
Button_Select:
Text: "Select"
OnSelect: |
Set(varRecipientEmail, ThisItem.Mail);
Set(varRecipientName, ThisItem.DisplayName);
Notify("Recipient set to " & ThisItem.DisplayName, NotificationType.Success);
Back()
D. Add OnStart formula (Screen1):
OnStart: |
// Initialize variables
Set(varIsRecording, false);
Set(varShowPrompt, false);
Set(varShowMenu, false);
Set(varAudioFile, Blank());
Set(varSending, false);
// Load saved recipient
// (app variables reset on each launch; to persist the chosen recipient
// across sessions, use SaveData/LoadData)
Set(varRecipientEmail,
If(IsBlank(varRecipientEmail),
"",
varRecipientEmail));
// Authenticate with Microsoft 365
// (sketch: Power Fx has no Connection.Connect function; Power Apps signs
// the user in automatically the first time its connectors are used)
If(IsBlank(varAuthToken),
Connection.Connect("Office 365 Users")
)
Step 7: Connect Power Apps to Power Automate
In Power Apps Studio:
Click Power Automate icon (lightning bolt) in left menu
Click Add flow
Select your "Process Donor Voice Memo" flow
This creates the connection:
ProcessDonorMemo.Run()
Add Office 365 Connectors:
Click Data → Add data
Search and add:
Office 365 Users
Office 365 Outlook
SharePoint
Step 8: Deployment and Security
A. Publish the App:
1. Click "File" → "Save"
2. Click "Publish"
3. Click "Publish this version"
4. Share app with fundraisers:
- File → Share
- Enter email addresses
- Set permission to "User"
B. Security Configuration:
Data Loss Prevention Policy:
- Power Platform Admin Center → Data policies → New policy
- Name: "Donor Memo Security"
- Add connectors to the "Business" group:
  * Office 365
  * SharePoint
  * Azure services
- Block all "Non-business" connectors

SharePoint Permissions:
- Go to the SharePoint site
- Create a folder: "DonorMemos"
- Set permissions:
  * Fundraisers: Contribute
  * Recipient: Full Control
  * Break inheritance from the parent

Azure AD Conditional Access:
- Azure AD → Security → Conditional Access
- New policy: "Donor App MFA"
- Users: Fundraiser group
- Cloud apps: Power Apps
- Require: Multi-factor authentication
Step 9: Testing Checklist
[ ] Fundraiser can launch app without login (SSO)
[ ] Record button starts/stops recording
[ ] Timer displays during recording
[ ] Send/Discard prompt appears after recording
[ ] Audio transcribes correctly
[ ] AI summary is accurate and formatted
[ ] Email sends to correct recipient
[ ] Audio file saves to SharePoint
[ ] Menu opens with hamburger icon
[ ] Can select new recipient from Teams
[ ] Can re-authenticate if needed
Solution 2: React Native Custom App
If you need a true native mobile app, here's the alternative approach:
File Structure:
donor-memo-app/
├── src/
│ ├── components/
│ │ ├── RecordButton.tsx
│ │ ├── MenuDrawer.tsx
│ │ └── RecipientSelector.tsx
│ ├── services/
│ │ ├── AuthService.ts
│ │ ├── AudioService.ts
│ │ ├── TranscriptionService.ts
│ │ └── GraphService.ts
│ ├── screens/
│ │ ├── HomeScreen.tsx
│ │ └── SettingsScreen.tsx
│ └── App.tsx
├── package.json
└── app.json
Core Implementation Files:
package.json:
{
"name": "donor-memo-app",
"version": "1.0.0",
"dependencies": {
"react": "18.2.0",
"react-native": "0.72.0",
"@react-navigation/native": "^6.1.7",
"@react-navigation/stack": "^6.3.17",
"@react-native-async-storage/async-storage": "^1.19.0",
"@azure/msal-react-native": "^2.0.0",
"@microsoft/microsoft-graph-client": "^3.0.7",
"react-native-audio-recorder-player": "^3.6.0",
"react-native-fs": "^2.20.0",
"axios": "^1.5.0"
}
}
src/services/AuthService.ts:
import { PublicClientApplication } from 'react-native-msal'; // community MSAL wrapper for React Native
const msalConfig = {
auth: {
clientId: 'YOUR_CLIENT_ID',
authority: 'https://login.microsoftonline.com/YOUR_TENANT_ID',
redirectUri: 'msauth://com.yourorg.donormemo/auth'
}
};
class AuthService {
private pca: PublicClientApplication;
constructor() {
this.pca = new PublicClientApplication(msalConfig);
}
async initialize() {
await this.pca.init();
}
async login() {
try {
const result = await this.pca.acquireToken({
scopes: [
'User.Read',
'Files.ReadWrite',
'Chat.ReadWrite',
'Mail.Send'
]
});
return result.accessToken;
} catch (error) {
console.error('Login failed:', error);
throw error;
}
}
async getToken() {
try {
const accounts = await this.pca.getAccounts();
if (accounts.length > 0) {
const result = await this.pca.acquireTokenSilent({
scopes: ['User.Read'],
account: accounts[0]
});
return result.accessToken;
}
return await this.login();
} catch (error) {
return await this.login();
}
}
async logout() {
const accounts = await this.pca.getAccounts();
if (accounts.length > 0) {
await this.pca.removeAccount(accounts[0]);
}
}
}
export default new AuthService();
src/services/AudioService.ts:
import AudioRecorderPlayer from 'react-native-audio-recorder-player';
import RNFS from 'react-native-fs';
class AudioService {
private audioRecorderPlayer: AudioRecorderPlayer;
private recordingPath: string;
constructor() {
this.audioRecorderPlayer = new AudioRecorderPlayer();
this.recordingPath = '';
}
async startRecording(): Promise<string> {
const path = `${RNFS.DocumentDirectoryPath}/donor_memo_${Date.now()}.m4a`;
this.recordingPath = path;
await this.audioRecorderPlayer.startRecorder(path);
this.audioRecorderPlayer.addRecordBackListener((e) => {
// Update recording time if needed
console.log('Recording:', e.currentPosition);
});
return path;
}
async stopRecording(): Promise<string> {
const result = await this.audioRecorderPlayer.stopRecorder();
this.audioRecorderPlayer.removeRecordBackListener();
return this.recordingPath;
}
async getAudioBase64(filePath: string): Promise<string> {
return await RNFS.readFile(filePath, 'base64');
}
async deleteRecording(filePath: string): Promise<void> {
if (await RNFS.exists(filePath)) {
await RNFS.unlink(filePath);
}
}
}
export default new AudioService();
src/services/TranscriptionService.ts:
import axios from 'axios';
import { Buffer } from 'buffer'; // React Native has no global Buffer; requires the 'buffer' npm package
class TranscriptionService {
// WARNING: never ship real keys inside the app bundle; in production,
// fetch them from a backend or Azure Key Vault (see Security Best Practices).
private apiKey: string = 'YOUR_AZURE_SPEECH_KEY';
private region: string = 'YOUR_AZURE_REGION';
async transcribe(audioBase64: string): Promise<string> {
try {
const audioBuffer = Buffer.from(audioBase64, 'base64');
const response = await axios.post(
`https://${this.region}.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US`,
audioBuffer,
{
headers: {
'Ocp-Apim-Subscription-Key': this.apiKey,
// NOTE: the recorder saves .m4a, but this endpoint expects WAV/PCM audio
// and only handles short clips (about a minute); longer memos would need
// conversion and the batch transcription API.
'Content-Type': 'audio/wav',
}
}
);
return response.data.DisplayText || '';
} catch (error) {
console.error('Transcription failed:', error);
throw error;
}
}
async summarizeWithAI(transcript: string): Promise<{ summary: string, actionItems: string[] }> {
try {
// Using Azure OpenAI or GPT-4
const response = await axios.post(
'https://api.openai.com/v1/chat/completions',
{
model: 'gpt-4',
messages: [
{
role: 'system',
content: 'You are a fundraising assistant. Summarize donor visit memos and extract action items. Format your response as JSON with keys: summary (string) and actionItems (array of strings).'
},
{
role: 'user',
content: `Please summarize this donor visit memo and create a list of action items:\n\n${transcript}`
}
],
temperature: 0.3,
// json_object mode requires a JSON-mode capable model (e.g. 'gpt-4-turbo')
response_format: { type: "json_object" }
},
{
headers: {
'Authorization': `Bearer YOUR_OPENAI_KEY`,
'Content-Type': 'application/json'
}
}
);
const result = JSON.parse(response.data.choices[0].message.content);
return result;
} catch (error) {
console.error('Summarization failed:', error);
throw error;
}
}
}
export default new TranscriptionService();
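Transcription and summarization are both network calls that can fail transiently on a phone. A small addition worth considering (a generic pattern, not specific to this app) is a retry helper with exponential backoff wrapped around either call:

// Generic retry helper: re-attempts a flaky async call with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Wait 1s, then 2s, then 4s before the next attempt
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Usage: const transcript = await withRetry(() => TranscriptionService.transcribe(audio));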
src/services/GraphService.ts:
import { Client } from '@microsoft/microsoft-graph-client';
import AuthService from './AuthService';
class GraphService {
private client: Client | null = null;
async initialize() {
const token = await AuthService.getToken();
this.client = Client.init({
authProvider: (done) => {
done(null, token);
}
});
}
async sendEmail(recipientEmail: string, subject: string, body: string, audioBase64?: string) {
if (!this.client) await this.initialize();
const message = {
subject: subject,
body: {
contentType: 'HTML',
content: body
},
toRecipients: [
{
emailAddress: {
address: recipientEmail
}
}
],
attachments: audioBase64 ? [
{
'@odata.type': '#microsoft.graph.fileAttachment',
name: `donor_memo_${Date.now()}.m4a`,
contentType: 'audio/m4a',
contentBytes: audioBase64
}
] : []
};
await this.client!.api('/me/sendMail').post({ message });
}
async searchUsers(query: string) {
if (!this.client) await this.initialize();
const users = await this.client!
.api('/users')
.filter(`startswith(displayName,'${query}') or startswith(mail,'${query}')`)
.select('id,displayName,mail,userPrincipalName')
.top(20)
.get();
return users.value;
}
}
export default new GraphService();
src/screens/HomeScreen.tsx:
import React, { useState, useEffect, useRef } from 'react';
import {
View,
Text,
TouchableOpacity,
StyleSheet,
Alert,
ActivityIndicator
} from 'react-native';
import AsyncStorage from '@react-native-async-storage/async-storage';
import AudioService from '../services/AudioService';
import TranscriptionService from '../services/TranscriptionService';
import GraphService from '../services/GraphService';
import AuthService from '../services/AuthService';
const HomeScreen = ({ navigation }) => {
const [isRecording, setIsRecording] = useState(false);
const [recordingPath, setRecordingPath] = useState('');
const [showPrompt, setShowPrompt] = useState(false);
const [isProcessing, setIsProcessing] = useState(false);
const [recipientEmail, setRecipientEmail] = useState('');
const [recordingTime, setRecordingTime] = useState(0);
const timerRef = useRef<ReturnType<typeof setInterval> | null>(null); // holds the timer interval id
useEffect(() => {
loadRecipient();
initializeAuth();
}, []);
const initializeAuth = async () => {
try {
await AuthService.initialize();
await AuthService.getToken(); // Ensure logged in
} catch (error) {
Alert.alert('Authentication Error', 'Please login to continue');
await AuthService.login();
}
};
const loadRecipient = async () => {
const saved = await AsyncStorage.getItem('recipientEmail');
if (saved) setRecipientEmail(saved);
};
const handleRecordPress = async () => {
if (isRecording) {
// Stop recording
const path = await AudioService.stopRecording();
stopTimer(); // stop the on-screen timer along with the recording
setRecordingPath(path);
setIsRecording(false);
setShowPrompt(true);
setRecordingTime(0);
} else {
// Start recording
await AudioService.startRecording();
setIsRecording(true);
startTimer();
}
};
const startTimer = () => {
// Track the interval id in a ref so it can be cleared when recording stops
timerRef.current = setInterval(() => {
setRecordingTime(prev => prev + 1);
}, 1000);
};
const stopTimer = () => {
if (timerRef.current) {
clearInterval(timerRef.current);
timerRef.current = null;
}
};
const handleSend = async () => {
if (!recipientEmail) {
Alert.alert('No Recipient', 'Please select a recipient in settings first.');
return;
}
setIsProcessing(true);
setShowPrompt(false);
try {
// Get audio as base64
const audioBase64 = await AudioService.getAudioBase64(recordingPath);
// Transcribe
const transcript = await TranscriptionService.transcribe(audioBase64);
// Summarize with AI
const { summary, actionItems } = await TranscriptionService.summarizeWithAI(transcript);
// Format email body
const emailBody = `
<html>
<body>
<h2>Donor Visit Summary</h2>
<p><strong>Date:</strong> ${new Date().toLocaleString()}</p>
<hr>
<h3>Full Transcription:</h3>
<p>${transcript}</p>
<hr>
<h3>AI Summary:</h3>
<p>${summary}</p>
<hr>
<h3>Action Items:</h3>
<ul>
${actionItems.map(item => `<li>${item}</li>`).join('')}
</ul>
<hr>
<p><em>Audio recording attached</em></p>
</body>
</html>
`;
// Send via Microsoft Graph
await GraphService.sendEmail(
recipientEmail,
`Donor Visit Memo - ${new Date().toLocaleDateString()}`,
emailBody,
audioBase64
);
// Clean up
await AudioService.deleteRecording(recordingPath);
setRecordingPath('');
Alert.alert('Success', 'Memo sent successfully!');
} catch (error) {
console.error('Send failed:', error);
Alert.alert('Error', 'Failed to send memo. Please try again.');
} finally {
setIsProcessing(false);
}
};
const handleDiscard = async () => {
await AudioService.deleteRecording(recordingPath);
setRecordingPath('');
setShowPrompt(false);
Alert.alert('Discarded', 'Recording has been deleted');
};
const formatTime = (seconds: number) => {
const mins = Math.floor(seconds / 60);
const secs = seconds % 60;
return `${mins.toString().padStart(2, '0')}:${secs.toString().padStart(2, '0')}`;
};
return (
<View style={styles.container}>
<TouchableOpacity
style={styles.menuButton}
onPress={() => navigation.openDrawer()}
>
<Text style={styles.menuIcon}>☰</Text>
</TouchableOpacity>
<View style={styles.centerContent}>
{isRecording && (
<Text style={styles.recordingStatus}>
🔴 Recording... {formatTime(recordingTime)}
</Text>
)}
<TouchableOpacity
style={[
styles.recordButton,
isRecording && styles.recordButtonActive
]}
onPress={handleRecordPress}
disabled={isProcessing}
>
<Text style={styles.recordButtonText}>
{isRecording ? '⏹ Stop Recording' : '🎤 Start Recording'}
</Text>
</TouchableOpacity>
{showPrompt && (
<View style={styles.promptContainer}>
<Text style={styles.promptText}>
What would you like to do with this recording?
</Text>
<TouchableOpacity
style={[styles.button, styles.sendButton]}
onPress={handleSend}
>
<Text style={styles.buttonText}>✉️ Send Memo</Text>
</TouchableOpacity>
<TouchableOpacity
style={[styles.button, styles.discardButton]}
onPress={handleDiscard}
>
<Text style={styles.buttonText}>🗑️ Discard</Text>
</TouchableOpacity>
</View>
)}
{isProcessing && (
<View style={styles.processingContainer}>
<ActivityIndicator size="large" color="#2563eb" />
<Text style={styles.processingText}>
Processing and sending memo...
</Text>
</View>
)}
</View>
{recipientEmail && (
<Text style={styles.recipientText}>
Recipient: {recipientEmail}
</Text>
)}
</View>
);
};
const styles = StyleSheet.create({
container: {
flex: 1,
backgroundColor: '#f9fafb',
},
menuButton: {
position: 'absolute',
top: 50,
left: 20,
zIndex: 10,
},
menuIcon: {
fontSize: 30,
color: '#374151',
},
centerContent: {
flex: 1,
justifyContent: 'center',
alignItems: 'center',
paddingHorizontal: 20,
},
recordingStatus: {
fontSize: 18,
color: '#dc2626',
marginBottom: 20,
fontWeight: '600',
},
recordButton: {
backgroundColor: '#2563eb',
paddingVertical: 30,
paddingHorizontal: 40,
borderRadius: 20,
minWidth: 280,
},
recordButtonActive: {
backgroundColor: '#dc2626',
},
recordButtonText: {
color: 'white',
fontSize: 20,
fontWeight: 'bold',
textAlign: 'center',
},
promptContainer: {
marginTop: 40,
width: '100%',
alignItems: 'center',
},
promptText: {
fontSize: 18,
marginBottom: 20,
textAlign: 'center',
color: '#374151',
},
button: {
paddingVertical: 15,
paddingHorizontal: 40,
borderRadius: 10,
marginVertical: 8,
minWidth: 200,
},
sendButton: {
backgroundColor: '#22c55e',
},
discardButton: {
backgroundColor: '#9ca3af',
},
buttonText: {
color: 'white',
fontSize: 16,
fontWeight: '600',
textAlign: 'center',
},
processingContainer: {
marginTop: 40,
alignItems: 'center',
},
processingText: {
marginTop: 15,
fontSize: 16,
color: '#6b7280',
},
recipientText: {
position: 'absolute',
bottom: 30,
alignSelf: 'center',
fontSize: 14,
color: '#6b7280',
},
});
export default HomeScreen;
src/screens/SettingsScreen.tsx:
import React, { useState, useEffect } from 'react';
import {
View,
Text,
TextInput,
TouchableOpacity,
FlatList,
StyleSheet,
Alert
} from 'react-native';
import AsyncStorage from '@react-native-async-storage/async-storage';
import GraphService from '../services/GraphService';
import AuthService from '../services/AuthService';
const SettingsScreen = ({ navigation }) => {
const [searchQuery, setSearchQuery] = useState('');
const [users, setUsers] = useState([]);
const [selectedEmail, setSelectedEmail] = useState('');
useEffect(() => {
loadSelectedRecipient();
}, []);
const loadSelectedRecipient = async () => {
const saved = await AsyncStorage.getItem('recipientEmail');
if (saved) setSelectedEmail(saved);
};
const handleSearch = async () => {
if (searchQuery.length < 2) {
Alert.alert('Search Query Too Short', 'Please enter at least 2 characters');
return;
}
try {
const results = await GraphService.searchUsers(searchQuery);
setUsers(results);
} catch (error) {
Alert.alert('Search Failed', 'Could not search for users');
}
};
const handleSelectUser = async (user) => {
await AsyncStorage.setItem('recipientEmail', user.mail);
await AsyncStorage.setItem('recipientName', user.displayName);
setSelectedEmail(user.mail);
Alert.alert('Success', `Recipient set to ${user.displayName}`);
};
const handleReLogin = async () => {
try {
await AuthService.logout();
await AuthService.login();
Alert.alert('Success', 'Successfully re-authenticated');
} catch (error) {
Alert.alert('Error', 'Failed to re-authenticate');
}
};
const renderUser = ({ item }) => (
<TouchableOpacity
style={styles.userItem}
onPress={() => handleSelectUser(item)}
>
<View>
<Text style={styles.userName}>{item.displayName}</Text>
<Text style={styles.userEmail}>{item.mail}</Text>
</View>
{selectedEmail === item.mail && (
<Text style={styles.selectedBadge}>✓ Selected</Text>
)}
</TouchableOpacity>
);
return (
<View style={styles.container}>
<Text style={styles.title}>Settings</Text>
<View style={styles.section}>
<Text style={styles.sectionTitle}>Choose Recipient</Text>
<TextInput
style={styles.searchInput}
placeholder="Search for team member..."
value={searchQuery}
onChangeText={setSearchQuery}
onSubmitEditing={handleSearch}
/>
<TouchableOpacity style={styles.searchButton} onPress={handleSearch}>
<Text style={styles.buttonText}>Search</Text>
</TouchableOpacity>
<FlatList
data={users}
renderItem={renderUser}
keyExtractor={(item) => item.id}
style={styles.userList}
/>
</View>
<View style={styles.section}>
<Text style={styles.sectionTitle}>Current Recipient</Text>
<Text style={styles.currentRecipient}>
{selectedEmail || 'No recipient selected'}
</Text>
</View>
<TouchableOpacity style={styles.reLoginButton} onPress={handleReLogin}>
<Text style={styles.buttonText}>🔐 Re-login to Microsoft 365</Text>
</TouchableOpacity>
</View>
);
};
const styles = StyleSheet.create({
container: {
flex: 1,
padding: 20,
backgroundColor: '#f9fafb',
},
title: {
fontSize: 28,
fontWeight: 'bold',
marginBottom: 30,
color: '#111827',
},
section: {
marginBottom: 30,
},
sectionTitle: {
fontSize: 18,
fontWeight: '600',
marginBottom: 15,
color: '#374151',
},
searchInput: {
borderWidth: 1,
borderColor: '#d1d5db',
borderRadius: 8,
padding: 12,
fontSize: 16,
backgroundColor: 'white',
marginBottom: 10,
},
searchButton: {
backgroundColor: '#2563eb',
padding: 12,
borderRadius: 8,
alignItems: 'center',
},
userList: {
marginTop: 15,
maxHeight: 300,
},
userItem: {
flexDirection: 'row',
justifyContent: 'space-between',
alignItems: 'center',
padding: 15,
backgroundColor: 'white',
borderRadius: 8,
marginBottom: 10,
borderWidth: 1,
borderColor: '#e5e7eb',
},
userName: {
fontSize: 16,
fontWeight: '600',
color: '#111827',
},
userEmail: {
fontSize: 14,
color: '#6b7280',
marginTop: 4,
},
selectedBadge: {
color: '#22c55e',
fontWeight: '600',
},
currentRecipient: {
fontSize: 16,
color: '#6b7280',
padding: 15,
backgroundColor: 'white',
borderRadius: 8,
},
reLoginButton: {
backgroundColor: '#2563eb',
padding: 15,
borderRadius: 8,
alignItems: 'center',
marginTop: 20,
},
buttonText: {
color: 'white',
fontSize: 16,
fontWeight: '600',
},
});
export default SettingsScreen;
src/App.tsx:
import React from 'react';
import { NavigationContainer } from '@react-navigation/native';
import { createDrawerNavigator } from '@react-navigation/drawer';
import HomeScreen from './screens/HomeScreen';
import SettingsScreen from './screens/SettingsScreen';
const Drawer = createDrawerNavigator();
export default function App() {
return (
<NavigationContainer>
<Drawer.Navigator
initialRouteName="Home"
screenOptions={{
headerShown: true,
drawerPosition: 'left',
}}
>
<Drawer.Screen
name="Home"
component={HomeScreen}
options={{ title: 'Donor Memo Recorder' }}
/>
<Drawer.Screen
name="Settings"
component={SettingsScreen}
options={{ title: 'Settings' }}
/>
</Drawer.Navigator>
</NavigationContainer>
);
}
Deployment Instructions for React Native App
iOS Deployment:
Install dependencies:
cd donor-memo-app
npm install
cd ios && pod install && cd ..

Configure Xcode:
Open ios/DonorMemoApp.xcworkspace
Set the Team and Bundle Identifier
Enable microphone permissions in Info.plist:
<key>NSMicrophoneUsageDescription</key>
<string>This app needs microphone access to record donor visit memos</string>

Build and deploy:
npx react-native run-ios
Android Deployment:
Configure permissions in AndroidManifest.xml:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />

Build and deploy:
npx react-native run-android
Security Best Practices
Store sensitive keys securely:
Use Azure Key Vault for API keys (see the sketch after this list)
Never commit keys to source control
Use environment variables
Enable MFA:
Require multi-factor authentication for all users
Configure in Azure AD Conditional Access
Data encryption:
Enable encryption at rest for SharePoint
Use HTTPS for all API calls
Implement app transport security
Audit logging:
Enable Azure AD audit logs
Monitor Power Automate run history
Track SharePoint access logs
Data retention:
Set retention policies for audio files
Configure SharePoint auto-delete rules
Document GDPR compliance procedures
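To make the Key Vault recommendation concrete, here is a minimal TypeScript sketch for a backend service; the vault URL and secret name are placeholders, and a mobile app should request secrets through such a backend rather than hold vault credentials itself:

// Fetch an API key from Azure Key Vault instead of hardcoding it.
// Requires: npm install @azure/identity @azure/keyvault-secrets
import { DefaultAzureCredential } from '@azure/identity';
import { SecretClient } from '@azure/keyvault-secrets';

const client = new SecretClient(
  'https://YOUR-VAULT-NAME.vault.azure.net', // placeholder vault URL
  new DefaultAzureCredential() // uses managed identity in Azure, az login locally
);

export async function getSpeechKey(): Promise<string> {
  const secret = await client.getSecret('SpeechAPIKey'); // placeholder secret name
  return secret.value ?? '';
}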
Maintenance and Monitoring
Monitor these metrics:
Flow success rate (Power Automate)
Transcription accuracy (test monthly)
User adoption (track active users)
Error rates (check logs weekly)
Monthly tasks:
Review security logs
Test end-to-end workflow
Check API usage and costs
Update documentation
Cost Estimates (Power Apps Solution)
Power Apps: $5-20/user/month (often included in M365)
Power Automate: Included with premium licenses
Azure Speech Services: ~$1 per hour of audio (Free tier: 5 hours/month)
Azure OpenAI/GPT: ~$0.03 per 1K tokens (estimate $0.10 per memo)
SharePoint Storage: Included in M365
Estimated cost for 100 memos/month: $15-30/month
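As a back-of-envelope check on that estimate, assuming 100 memos of roughly five minutes each at the rates above:

// Rough monthly cost check (assumptions: 100 memos, ~5 minutes each,
// Speech free tier of 5 hours, rates as listed above).
const memosPerMonth = 100;
const minutesPerMemo = 5;
const freeSpeechHours = 5;
const speechRatePerHour = 1.0;  // ~$1 per audio hour after the free tier
const gptCostPerMemo = 0.10;    // rough per-memo estimate from above

const audioHours = (memosPerMonth * minutesPerMemo) / 60;                          // ~8.3 hours
const speechCost = Math.max(0, audioHours - freeSpeechHours) * speechRatePerHour;  // ~$3.30
const gptCost = memosPerMonth * gptCostPerMemo;                                    // $10.00

console.log(`Variable cost: ~$${(speechCost + gptCost).toFixed(2)}/month`);        // ~$13
// Licensing on top of this lands in the $15-30/month range quoted above.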
Support and Troubleshooting
Common issues:
Recording not working: Check microphone permissions
Authentication fails: Verify Azure AD app registration
Transcription errors: Check audio quality and format
Email not sending: Verify Graph API permissions
Getting help:
Microsoft Power Apps community forums
Azure support portal
Your M365 administrator
This solution provides a secure, enterprise-grade system for your nonprofit fundraisers. The Power Apps solution is recommended for fastest deployment and easiest maintenance. Let me know if you need clarification on any steps!