Enter the experiment.
Over the weekend, I decided to show, not tell. I needed Skip to experience the new technology for himself, not just hear me talk about it. What if we could put together a business plan to launch a hypothetical new farm in under 40 minutes? I could do it using ChatGPT 4 with Advanced Data Analysis enabled, screen sharing to his iPhone, and we would collaborate to build it together.
We briefly discussed our assumptions:
- We have 1,000 acres to farm in East Texas
- We have $1M in startup capital to purchase equipment
- We need to secure a line of credit from a bank for year 1 operating expenses to get started
- We’re interested in growing wheat and soybeans (what Skip actually farmed back in the day)
We started simple:
Prompt 1: “Write me an outline for a business plan to start a new farm in East Texas to grow wheat and soybeans.”
Skip was immediately impressed with the long-form results. But wait, we were just getting started…
Prompt 2: “Give me a list of equipment I need to get started”
Prompt 3: “Put it in a table and prioritize numerically based on what is most urgently needed to least urgent”
Prompt 4: “Now create a new column for budget allocation. Assume I’m going to use a mix of leases and purchases, create an additional column to recommend lease vs. buy, assume I have $1M in starting capital.”
(Time elapsed 4 minutes)
Result:
Prompt 5: “How much should I set aside to ask for a line of credit for variable costs (seed, fertilizer, labor, etc.)? Assume 1,000 acres. Create a new table.”
Result:
By this point, I felt like Skip had just witnessed Apollo 11 all over again. He could barely contain his excitement. But we still weren’t done…
Prompt 6: “Write me a Monte Carlo simulation where I can adjust my input assumptions and test the outcome on my budget”
Results:
import numpy as np
import matplotlib.pyplot as plt

# Define number of simulations
n_simulations = 10000

# Assumptions with [mean, std deviation]
wheat_seed_cost = [20, 2]  # Cost per acre
soybean_seed_cost = [50, 5]
fertilizer_wheat_cost = [50, 5]
fertilizer_soybean_cost = [40, 4]
labor_cost = [25, 3]
pesticide_cost = [30, 3]
fuel_maintenance_cost = [15, 2]
miscellaneous_cost = [10, 1]

def simulate_farm_costs():
    # Draw each per-acre cost from a normal distribution
    wheat_seed = np.random.normal(wheat_seed_cost[0], wheat_seed_cost[1])
    soybean_seed = np.random.normal(soybean_seed_cost[0], soybean_seed_cost[1])
    fertilizer_wheat = np.random.normal(fertilizer_wheat_cost[0], fertilizer_wheat_cost[1])
    fertilizer_soybean = np.random.normal(fertilizer_soybean_cost[0], fertilizer_soybean_cost[1])
    labor = np.random.normal(labor_cost[0], labor_cost[1])
    pesticide = np.random.normal(pesticide_cost[0], pesticide_cost[1])
    fuel_maintenance = np.random.normal(fuel_maintenance_cost[0], fuel_maintenance_cost[1])
    miscellaneous = np.random.normal(miscellaneous_cost[0], miscellaneous_cost[1])
    total_cost = (wheat_seed + soybean_seed + fertilizer_wheat + fertilizer_soybean +
                  labor + pesticide + fuel_maintenance + miscellaneous) * 1000  # for 1000 acres
    return total_cost

results = [simulate_farm_costs() for _ in range(n_simulations)]

# Visualization
plt.hist(results, bins=50, edgecolor="black", alpha=0.7)
plt.title('Distribution of Total Farm Costs')
plt.xlabel('Total Cost')
plt.ylabel('Frequency')
plt.grid(True, alpha=0.2)
plt.show()
Now for the moment of truth: would the code run? We hopped over to a Jupyter notebook to run it. And there it was on the first try:
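While we were in the notebook, a few extra lines (our own illustrative addition here, not something we prompted for) can summarize the same distribution numerically, which helps when deciding how large a line of credit to request. This assumes it runs right after the simulation block above:
# Summarize the simulated year 1 variable costs (run after the block above)
mean_cost = np.mean(results)
p5, p95 = np.percentile(results, [5, 95])
print(f"Mean simulated cost: ${mean_cost:,.0f}")
print(f"90% of simulations fall between ${p5:,.0f} and ${p95:,.0f}")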
By this point, we were only 10–15 minutes in. Still some time to spare… Could we make it interactive?
We need to create a list of parameters that let the user dynamically adjust the assumptions with sliders. Again we turn to ChatGPT for recommendations on what those parameters should be, based on the Monte Carlo we built earlier:
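For reference, given the cost assumptions in the script above, that list works out to a mean and a standard deviation for each of the eight cost categories, 16 parameters in all. A minimal sketch of how they might be organized (our reconstruction, not ChatGPT’s exact output):
# 16 slider parameters: a mean and a std deviation for each per-acre cost category
slider_parameters = {
    'wheat_seed_cost': {'avg': 20, 'std': 2},
    'soybean_seed_cost': {'avg': 50, 'std': 5},
    'fertilizer_wheat_cost': {'avg': 50, 'std': 5},
    'fertilizer_soybean_cost': {'avg': 40, 'std': 4},
    'labor_cost': {'avg': 25, 'std': 3},
    'pesticide_cost': {'avg': 30, 'std': 3},
    'fuel_maintenance_cost': {'avg': 15, 'std': 2},
    'miscellaneous_cost': {'avg': 10, 'std': 1},
}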
Once we have our list of parameters built, we create a ‘measures’ table in Power BI tied to 16 slicer visuals that let the user manually select their inputs and dynamically update the Monte Carlo simulation. To do this we create a ‘Python visual’ in Power BI, drag all of our measures in, and then update the code as follows:
# The following code to create a dataframe and remove duplicated rows is always executed and acts as a preamble for your script:
# dataset = pandas.DataFrame(fertilizer_soybean_cost_avg Value, fertilizer_soybean_cost_std Value, fertilizer_wheat_cost_avg Value, fertilizer_wheat_cost_std Value, fuel_maintenance_cost_avg Value, fuel_maintenance_cost_std Value, labor_cost_avg Value, labor_cost_std Value, miscellaneous_cost_avg Value, miscellaneous_cost_std Value, pesticide_cost_avg Value, pesticide_cost_std Value, soybean_seed_cost_avg Value, soybean_seed_cost_std Value, wheat_seed_cost_avg Value, wheat_seed_cost_std Value)
# dataset = dataset.drop_duplicates()
# Paste or type your script code here:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Assuming the data from Power BI is passed as 'dataset'
df = dataset
# Fetch values from the dataset
wheat_seed_cost_avg = df['wheat_seed_cost_avg Value'].iloc[0]
wheat_seed_cost_std = df['wheat_seed_cost_std Value'].iloc[0]
soybean_seed_cost_avg = df['soybean_seed_cost_avg Value'].iloc[0]
soybean_seed_cost_std = df['soybean_seed_cost_std Value'].iloc[0]
fertilizer_wheat_cost_avg = df['fertilizer_wheat_cost_avg Value'].iloc[0]
fertilizer_wheat_cost_std = df['fertilizer_wheat_cost_std Value'].iloc[0]
fertilizer_soybean_cost_avg = df['fertilizer_soybean_cost_avg Value'].iloc[0]
fertilizer_soybean_cost_std = df['fertilizer_soybean_cost_std Value'].iloc[0]
labor_cost_avg = df['labor_cost_avg Value'].iloc[0]
labor_cost_std = df['labor_cost_std Value'].iloc[0]
pesticide_cost_avg = df['pesticide_cost_avg Value'].iloc[0]
pesticide_cost_std = df['pesticide_cost_std Value'].iloc[0]
fuel_maintenance_cost_avg = df['fuel_maintenance_cost_avg Value'].iloc[0]
fuel_maintenance_cost_std = df['fuel_maintenance_cost_std Value'].iloc[0]
miscellaneous_cost_avg = df['miscellaneous_cost_avg Value'].iloc[0]
miscellaneous_cost_std = df['miscellaneous_cost_std Value'].iloc[0]
# Define number of simulations
n_simulations = 10000
# Assumptions with [mean, std deviation]
wheat_seed_cost = [wheat_seed_cost_avg, wheat_seed_cost_std]
soybean_seed_cost = [soybean_seed_cost_avg, soybean_seed_cost_std]
fertilizer_wheat_cost = [fertilizer_wheat_cost_avg, fertilizer_wheat_cost_std]
fertilizer_soybean_cost = [fertilizer_soybean_cost_avg, fertilizer_soybean_cost_std]
labor_cost = [labor_cost_avg, labor_cost_std]
pesticide_cost = [pesticide_cost_avg, pesticide_cost_std]
fuel_maintenance_cost = [fuel_maintenance_cost_avg, fuel_maintenance_cost_std]
miscellaneous_cost = [miscellaneous_cost_avg, miscellaneous_cost_std]
def simulate_farm_costs():
    # Draw each per-acre cost from a normal distribution using the slider-driven inputs
    wheat_seed = np.random.normal(wheat_seed_cost[0], wheat_seed_cost[1])
    soybean_seed = np.random.normal(soybean_seed_cost[0], soybean_seed_cost[1])
    fertilizer_wheat = np.random.normal(fertilizer_wheat_cost[0], fertilizer_wheat_cost[1])
    fertilizer_soybean = np.random.normal(fertilizer_soybean_cost[0], fertilizer_soybean_cost[1])
    labor = np.random.normal(labor_cost[0], labor_cost[1])
    pesticide = np.random.normal(pesticide_cost[0], pesticide_cost[1])
    fuel_maintenance = np.random.normal(fuel_maintenance_cost[0], fuel_maintenance_cost[1])
    miscellaneous = np.random.normal(miscellaneous_cost[0], miscellaneous_cost[1])
    total_cost = (wheat_seed + soybean_seed + fertilizer_wheat + fertilizer_soybean +
                  labor + pesticide + fuel_maintenance + miscellaneous) * 1000  # for 1000 acres
    return total_cost

results = [simulate_farm_costs() for _ in range(n_simulations)]

# Convert results into a dataframe
df_simulated_results = pd.DataFrame(results, columns=['Total Cost'])

# Calculate the Interquartile Range (IQR)
Q1 = df_simulated_results['Total Cost'].quantile(0.25)
Q3 = df_simulated_results['Total Cost'].quantile(0.75)
# Plotting the histogram
plt.figure(figsize=(10, 6))
n, bins, patches = plt.hist(df_simulated_results['Total Cost'], bins=50, color="blue", edgecolor="black", alpha=0.7)
plt.title('Distribution of Year 1 Variable Farm Costs from Simulation')
plt.xlabel('Year 1 Variable Cost')
plt.ylabel('Frequency')
plt.grid(True, which="both", linestyle="--", linewidth=0.5)

# Shade the bars that fall within the IQR
for i in range(len(patches)):
    if bins[i] > Q1 and bins[i] < Q3:
        patches[i].set_facecolor('green')

plt.axvline(Q1, color="red", linestyle="dashed", linewidth=1)
plt.axvline(Q3, color="red", linestyle="dashed", linewidth=1)
plt.tight_layout()
plt.savefig('simulated_costs_histogram.png')  # This will save the figure as an image file
plt.show()
Just for fun, we prompted ChatGPT to define the Interquartile Range (IQR) and shade it a different color; we also manually updated the chart labels and x-axis. The rest was just cleaning up the visuals a bit in Power BI to make it a little more user friendly. The end result:
Now we have a dynamic Monte Carlo simulation to play around with different input cost assumptions and project out the variable operating expense we would need to get our farming operation off the ground. Leveraging ChatGPT 4, we wrote almost no code, just tweaked it a bit, did most of the work from an iPhone over screen share, built the last piece in Power BI Desktop, and screen shared it over the Power BI iPhone app. All in about 30–40 minutes of work.
My grandfather’s verdict? We had accomplished in 40 minutes what would have taken him two years to do on his own “back in the day.” Yes, I acknowledge there is a lot more we could do, and the “simulation” is far from perfect. (For example, we don’t delineate between the percentage of acres committed to soybeans vs. wheat.) But for 40 minutes? Even I was impressed. And that is the promise of Gen AI: democratizing data science, encouraging experimentation, and speeding up the ability to develop, all within the palm of your hand. And giving a grandfather and his grandson an opportunity to reconnect over some statistics and leverage technology in new and unexpected ways.
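If we ever revisit it, that split could be folded in as one more slider. A minimal sketch, assuming a single hypothetical wheat_share fraction of the 1,000 acres (not something we built that day):
import numpy as np

def simulate_farm_costs_with_split(wheat_share=0.5, acres=1000):
    # Per-acre costs that apply only to the wheat or soybean share of the acreage
    wheat_per_acre = np.random.normal(20, 2) + np.random.normal(50, 5)      # wheat seed + fertilizer
    soybean_per_acre = np.random.normal(50, 5) + np.random.normal(40, 4)    # soybean seed + fertilizer
    # Per-acre costs shared across the whole operation (labor, pesticide, fuel, misc)
    shared_per_acre = (np.random.normal(25, 3) + np.random.normal(30, 3) +
                       np.random.normal(15, 2) + np.random.normal(10, 1))
    return (wheat_per_acre * wheat_share +
            soybean_per_acre * (1 - wheat_share) +
            shared_per_acre) * acres

results = [simulate_farm_costs_with_split(wheat_share=0.6) for _ in range(10_000)]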