
OEM pack management efficiency: Not so good(?)...

#1 · (Edited)
Some months ago I started toying with the stock pack management a lot, using the OBDIIC&C SoC reset function and generally just by paying attention to IMA info and acting on it. This was in preparation for trying to implement a lithium LTO pack at some point, without a BMS. I've kept track of various OBDIIC&C IMA parameters for years, for every trip. I log nominal state of charge, pack temp, etc., reset the amp-hour count at the beginning of each trip, and log it all at the end of each trip. So I have a massive spreadsheet of all these data...

I've also calculated a few things for each trip and, over the years, have honed in on a couple of simple metrics to keep track of what's going on, what the true state of charge of my pack is or should be, etc. For example, I add net amp-hours from trip to trip, so I have a running total of how many amp-hours have coursed through my pack...

After I started doing the 'toying', I noticed something interesting when I graphed these various data. If I let the car do its own thing, efficiency looks bad. If I deliberately do some things, deliberately thwart stock pack management, efficiency is better. That's what's illustrated in the graph below... Letting the car do its own thing is the left half, about 3 1/2 months, 'toying' with stuff is the right half, also about 3 1/2 months...


Over the first half the blue cumulative-count SoC curve has a steep slope, while the black nominal SoC curve is relatively flat. This shows that maintaining the pack's state of charge continually takes more amp-hours in than come back out. If the pack were 100% efficient, the blue curve would be flat and would track the black curve - I'd put in say 500mAh, I'd pull out 500mAh, and state of charge would end up exactly where it started. Less than 100% efficiency ('Coulombic efficiency') is normal and expected. But by how much? The steepness of the slope of the blue curve reflects the degree of inefficiency...

Inefficiency has two main parts: self-discharge and throughput losses. The green curve attempts to capture the throughput part. When we calculate state of charge we adjust it down slightly because we know that not all the current contributes to charge or discharge energy...

In a nutshell - since this is taking me longer than I had hoped - after I started messing with stuff, the blue curve stopped rising as fast and as far, and the green curve started going down. Both of these reflect better efficiency: I'm using fewer amp-hours to maintain roughly the same states of charge. Since the green curve is supposed to account for throughput efficiency, most of the gain in efficiency is probably coming from less self discharge. That's more of a guess at this point, but probably the case...

black curve - the nominal state of charge as seen on the OBDIIC&C SoC parameter

blue curve - state of charge based on the cumulative count of net amp-hours. For example, the cumulative count might be 8500mAh after a few weeks' worth of trips, whereas the nominal capacity is 6500mAh - so the cumulative-count SoC would be 8500/6500 X 100 = 131%.

green curve - state of charge based on net amp-hours cumulative count but adjusted for throughput (in)efficiency. Throughput inefficiency is the fraction of current that gets lost on the way, the current that doesn't end up actually charging the pack or contributing to output energy. I've calculated this a few times based on longer drives and on the car's own state of charge determination. The loss is about 2% - so for every 100mAh that go through the pack, only about 98 of them do something useful...
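If it helps, here's a rough sketch (Python) of the bookkeeping behind those three curves. The field names and the way the 2% adjustment gets folded in are simplified for illustration - this isn't my literal spreadsheet - but it's the same basic arithmetic:

# Sketch only: per-trip logs in, the three curves out. Field names are made up;
# the nominal capacity and ~2% throughput loss are the figures mentioned above.
NOMINAL_CAPACITY_MAH = 6500
THROUGHPUT_LOSS = 0.02

def soc_curves(trip_logs):
    # trip_logs: one entry per trip, with 'net_mah' (charge in minus charge out)
    # and 'nominal_soc_pct' (the OBDIIC&C SoC logged at trip end)
    cumulative_mah = 0.0
    adjusted_mah = 0.0
    points = []
    for trip in trip_logs:
        cumulative_mah += trip['net_mah']
        adjusted_mah += trip['net_mah'] * (1.0 - THROUGHPUT_LOSS)
        points.append({
            'black_nominal_pct': trip['nominal_soc_pct'],
            'blue_cumulative_pct': 100.0 * cumulative_mah / NOMINAL_CAPACITY_MAH,
            'green_adjusted_pct': 100.0 * adjusted_mah / NOMINAL_CAPACITY_MAH,
        })
    return points
# e.g. 8500mAh net after a few weeks -> 8500/6500 X 100 = ~131% on the blue curve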

I also graphed average pack temp - I log pack temp at the beginning and end of each trip and the white curve reflects an average of these two values. I added this curve recently, because I was wondering if maybe higher temps over the first half and cooler temps over the second were responsible for the difference. But it doesn't look like that's the case. Average pack temps start to fall at about the middle of the left half of the graph, but we don't see a change in the slopes of the blue and green curves...

The red triangles show when positive recals happened. These are junctures at which the nominal state of charge curve accurately reflects what the car thinks the state of charge is. Since one of my toying methods involves resetting the nominal state of charge with the OBDIIC&C, the black curve doesn't always reflect this 'true' nominal. For example, one of my methods involves resetting SoC high when it's really low, typically by 10 percentage points. At those junctures, the nominal state of charge will read say 75% but it's really 65%. I don't go too far in this direction, though, so the black nominal curve generally reflects the 'true nominal'... The red triangles show when indeed we're looking at true nominal values...

Each data point represents a single day. Usually there's just one trip per day, but sometimes there are no trips and a few times there are multiple trips. I've condensed the multiple-trip logs into one log per day, and I've added days where there were no trips. This makes the x-axis consistent with respect to time. The labels are divided into weeks, so about a week's time per vertical grid line... The whole graph is about 7 months. The first half, up until about the end of October, reflects a period when I was letting the car do its own thing. The second half reflects a period when I was toying with the pack management...
 
#2 ·
That's a good read. I wonder if it's possible to make an intelligent device to do what you have done manually all this time. I am interested: has MPG also gone up?
 
#3 · (Edited)
MPG hasn't gone up, I wouldn't expect it to. The significance of the 'efficiency gains' as I see it is more about pack longevity. My hunch is that the car's imbalanced use of the pack is at least partially why packs go bad sooner rather than later. But I'd have no way to prove that, or even explain it...

On 'making an intelligent device' - I'd probably have to figure out what exactly it is that I'm doing that produces the gains, plus I'd probably have to look even closer, with tighter testing, to verify that the gains are indeed real... I might be missing something, I don't think I am, but maybe...

In any event, in general, we could probably program the OBDIIC&C to reset state of charge to 75% from like 70%, to avoid the hanging pos recal and extra charge that's allowed to 'top up' the pack, and that would do most of what I do. That's the biggest thing: I don't top the pack up as high and as often as the car would do itself... I think that's probably the number one cause of inefficiency: the pack loses more charge when it's charged higher, via self discharge. Charging is also slightly less efficient 'up there' than it is down lower...
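Something like this decision rule is what I have in mind - purely hypothetical, it's not an existing OBDIIC&C feature, and the thresholds are just the ones I mentioned:

# Hypothetical sketch of the reset rule described above; nothing the OBDIIC&C does today.
RESET_TRIGGER_PCT = 70   # nominal SoC where the car would otherwise hang a pos recal / top up
RESET_TARGET_PCT = 75    # bump nominal up to here so the extra top-up charge never happens

def maybe_reset_soc(nominal_soc_pct, send_soc_reset):
    # send_soc_reset: placeholder for whatever would actually command the SoC reset
    if RESET_TRIGGER_PCT <= nominal_soc_pct < RESET_TARGET_PCT:
        send_soc_reset(RESET_TARGET_PCT)
        return RESET_TARGET_PCT
    return nominal_soc_pct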
 
#4 ·
Damn nice presentation.

Food for thought: The "top up" of the pack actually contributes to improved balance. As SoC increases, charge efficiency decreases. Lower SoC cells accept more current as charge vs. heat. This enables lower SoC cells to "catch up" with the higher ones a tiny bit with every regen or background charge.

The healthier the pack, the better the above works. IMHO, imbalance occurs and accelerates when SD and IR overcome the above mechanism.

Your method thwarts the above and may actually worsen the threshold at which pack health overcomes the "balancing" obtained by pushing to higher SoC.

Was grid charging conducted at any time during the logged data period?
 
#6 ·
I also graphed average pack temp - I log pack temp at the beginning and end of each trip and the white curve reflects an average of these two values. I added this curve recently, because I was wondering if maybe higher temps over the first half and cooler temps over the second were responsible for the difference. But it doesn't look like that's the case. Average pack temps start to fall at about the middle of the left half of the graph, but we don't see a change in the slopes of the blue and green curves...
I love your chart. Before I read the above sentence (because I was busy staring at the chart) I did see a possible correlation between changes in temperature and the slope of the blue and green lines. I've taken the liberty to mark up your beautiful chart - I hope you don't mind. (I'm sorry that I don't know how to make it appear actual size.)

In the first section the pack temperature averages 75F... in the second it drops gradually from 75F to 50F. (Does that correspond to falling autumn temperatures?) The third is pretty constant near 50F, as is the fourth. I separated the third and fourth because the black line shows a lot more deviation from center in the fourth than in the third, and the slope of the blue line has changed as well. (What changed in mid-January?)

I thought that 25F was a fairly large difference (it will make me grab a coat), but a quick Google for battery performance at these two temperatures did not suggest much difference. But each data point is an average of highs and lows. What would the plot of the median temperature look like?

I have been an Insight owner for a whole month and have hardly scratched the surface of the forums, and am not a statistics wonk either, so I'll stop pretending to be able to interpret this.

Looking forward to your continued analysis!
 
#16 ·
He's good, Sean. I really enjoy his reads, although I know I'm at a big learning curve; he thinks out of the box a lot.
 
#7 · (Edited)
Yeah, I kind of see/saw a bit of a possible relationship between temp and the blue curve, mainly between the first and second half of the left half of the chart, where temp is high over the first half and the blue curve perhaps looks a tad steeper, then temp declines and the curve looks a tad shallower. But it's hard to say. It should be more obvious, and it should show up just as clearly in the green curve - but I don't really see it in the green curve. There's a lot of 'noise' in these graphs so we're really mainly interested in the broad sweeps/can only get a read on those...

I don't see any relationship over the right half of the chart - unless, by that time, when the temp is consistently lower than it was over the left half of the chart, the cooler temps affect all those data points equally, and so they're all pulled down (i.e. the slopes of the blue and green curves are set on their new trajectories and remain there as long as the average temp is the same)... Really hard to say. The key to me is the transition, when temps start declining, and over that interval I don't see a strong relationship. But, we know temp has to make a difference, it must play some part in all this...

Keep in mind that the curves are going to look jagged simply based on the nature of the trips being logged and the 'stuff' I'm doing. Sometimes, most of the time, I'll start the trip with nominal state of charge at say 70% and I'll end it with state of charge at about the same level, mainly because that's what happens when I'm driving, naturally (assist and regen tend to even out based on the type of driving I'm doing). But sometimes I'll end the trip at a lower nominal state of charge - and that's what gets logged. The day-to-day variation, well, there's going to be quite a bit. But over longer stretches it will all tend to even out. Really, we're not interested in the individual 'data points'; we're interested in the trend lines, the longer-term trend lines...

What I'm trying to say is, the two periods, roughly 3 1/2 months long each, left half vs. right half, are about as fine a 'resolution' as we can focus in on. Each interval should be long enough to establish the broad impact of the two different management regimes - OEM vs. 'manual'. Other than that we can't say much - because the finer temporal variation can be caused by various things... Hope that helps more than hurts...

Oh, here's a thought. Really, when it comes down to it, there are only 3 key data points - the beginning, the middle, and the end. The beginning and the middle form the start and end points for the first management scheme: OEM. The middle and end mark the start and end of the second management scheme: MANUAL.

Over the OEM management interval, the amount of 'excess' current needed to maintain state of charge was 12,346mAh.

Over the 'MANUAL' management interval, the right half of the chart, the amount of 'excess' current needed to maintain state of charge was only 1,440mAh (roughly, both of these figures).

12,346mAh/1,440mAh=8.6 -- the OEM management required 8.6 times more current to maintain state of charge than when I did things manually... That seems really big to me...
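To be clear about what I mean by 'excess' current, here it is sketched out - illustrative arithmetic only, not the exact spreadsheet formulas:

# 'Excess' current over an interval: the net mAh that went in beyond what the change in
# nominal SoC accounts for. The two figures below are the left and right halves above.
NOMINAL_CAPACITY_MAH = 6500

def excess_mah(net_mah_in, start_soc_pct, end_soc_pct):
    soc_change_mah = (end_soc_pct - start_soc_pct) / 100.0 * NOMINAL_CAPACITY_MAH
    return net_mah_in - soc_change_mah

oem_excess_mah = 12346      # left half: letting the car manage the pack
manual_excess_mah = 1440    # right half: 'manual' management
print(oem_excess_mah / manual_excess_mah)   # ~8.6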
 
#14 · (Edited)
I was just reviewing some of these threads and posts in relation to my use of the A03 BCM and what I've seen, and a post I made in the BCM versions thread. Some of it has made me wonder about something that seems really important yet is a very basic concept. Given what I wrote and quote at the bottom of this post, about the "excess" charge (current, really) that the OEM management scheme seems to impose versus when I do some things manually - basically prevent high charge state charging and use low charge states - a question comes up: What happens to all that excess charge/current? What electro-chemical processes - what possible chemical and physical changes - might be happening in the cells?

Think about it. If one management scheme has me charging 8.6 times more than another just to maintain the same state of charge, over a 3 1/2 month period, what's that charging (passing current) doing to the cells? It's obviously not contributing to anything useful - it doesn't lead to more power or energy at my disposal. It just 'disappears'. BUT - does it really disappear?

I'm no chemistry or battery expert, so I'm just throwing this out here... But I got this vague idea in my head.

True overcharge results in heat and possibly gases and stuff, but I don't think that's exactly what we're dealing with here. Maybe. But either way, I'm thinking there must be side reactions or something like that, probably during both the actual, active charging and discharging, and perhaps also during self-discharging. The current goes through the cell and does something - but something that doesn't actually contribute to charge state. On the contrary - I'm thinking it probably does something harmful, like perhaps it contributes to the pernicious 'gamma mod NiOOH' (1) rather than the good and normal beta mod. It almost seems like it has to do something harmful, almost by definition - for if it's not contributing to charge state it's either heating up the cell and/or contributing to another electro-chemical process that doesn't contribute to the useful energy the cell stores, i.e. both harmful...

In other words, I'm starting to think there's a much more direct relationship than I have imagined before between excess charging (current) - charging (current) that doesn't contribute to charge state - and potentially harmful, degrading electrochemical processes. When I say "excess charging" I'm talking about the long term, overall extra current needed to maintain charge state - but this also encompasses discharging and self-discharging as you can lose energy in those processes as well; you simply need to make it up during the charge and that's why we're calling it 'excess charge'...

Anyway, I guess that's about it, for now.

Over the OEM management interval, the amount of 'excess' current needed to maintain state of charge was 12,346mAh. Over the 'MANUAL' management interval, the right half of the chart, the amount of 'excess' current was only 1,440mAh.
12,346mAh/1,440mAh=8.6 -- OEM management required 8.6 times more current to maintain state of charge than when I did things manually. That seems really big to me.
I just noticed a glaring improvement: In that first graph above, left half, the blue curve peaks at about 275% after 3 1/2 months of pack usage; in my contemporary graph I'm not even up to 200% and it's been about 6 1/2 months - double the time, far less charge needed to maintain charge state. Over that first 3 1/2 months I needed to charge the pack an equivalent of 2.75 times the nominal pack capacity in order to maintain charge state; now, that figure is only a bit over 2 times, over nearly double the time span.
(1) gamma mod NiOOH, number two in a list of the three major contributors to Insight NiMH degradation posed by sser2 some years ago. Robert Huggins also mentions it (along with many other battery folks), in the chapter on NiMH and memory effect, in his book Advanced Batteries. Here's what sser2 had to say about it:

"Formation of γ-NiOOH. The normal isoform of the NiOOH in the cell is β, which is readily converted into the Ni(OH)2 during discharge. γ-NiOOH, which slowly accumulates as battery ages, is converted to Ni(OH)2 with more difficulty, and only after all the β isoform is gone. γ-NiOOH accumulates because the battery is never fully discharged in the car.
γ-NiOOH is bad in two more ways. It crystallizes with considerable amount of water, which sequesters water from the electrolyte, making the electrolyte less conductive. The consequence is increased internal resistance. When γ-NiOOH is eliminated following deep discharge, this sequestered water returns back to the electrolyte, and internal resistance decreases. Sequestration of water in γ-NiOOH dramatically increases its volume. Whereas the β-NiOOH<=>Ni(OH)2 transition is associated with only 1-2% volumetric change, the γ-NiOOH<=>Ni(OH)2 transition alters the volume by a whopping 40%. It is therefore much more destructive to the active layer of the electrode."
 
#8 ·
I'd like to point out that in a manual Insight, charge of the HV battery can vary considerably depending on the driver's behavior. My brother's Insight was consistently at 3 bars or lower, because he had a lead foot. Despite the car's best effort, he consistently ran it flat. I suggested he try shifting a little higher and easing off the pedal, and he was more consistently able to keep the battery at half (or higher) charge.

For me, I have to make an effort to run it down, or I consistently have full or full-1 bars. When I lend the car to my roommate, I often come out to a near-empty battery, so it's not the car.
 
#9 · (Edited)
Posted something in my 'background charge' thread and ran into this. Thought I'd just add a few updates/thoughts here on this OEM efficiency topic.

I've been doing this 'manual management' stuff for quite a while now. I've continued with it since the first post in this thread. I don't try to use the middle SoC range now, though; I've been using below 50%, usually around 40%. I occasionally charge to pos recal.

In general my pack seems to have gotten better - puts out full assist with less voltage sag and at lower charge states and temps. It seems to have also gotten more efficient. For instance, I manually calculate state of charge based on cumulative amp-hour count and an efficiency adjustment factor; the adjustment factor used to be around 2% but now it's only 1% (actually I think it was around 2.4% when I started). This decrease could be due to better input and output efficiency and less self-discharge... It just means that, in order to maintain charge state I need to put in 101 mAh for every 100 I take out...

Anyhow, maybe I'll add more later - I just noticed that it's been a year to the day since I started my contemporary 'efficiency' graph, kind of like the one in my first post above. It's equally divided, too, into two 6-month halves (I did another tap UDD/grid charge 6 months ago, and the same 6 months before that)...

edit: oh wow, I just noticed a glaring improvement: In that first graph above, left half, the blue curve peaks at about 275% after 3 1/2 months of pack usage; in my contemporary graph I'm not even up to 200% and it's been about 6 1/2 months - double the time, far less charge needed to maintain charge state. Over that first 3 1/2 months I needed to charge the pack an equivalent of 2.75 times the nominal pack capacity in order to maintain charge state; now, that figure is only about 2 times over nearly double the time span.
 
#10 ·
I've been using/testing a later model BCM for the past few weeks, discussed a bit here: https://www.insightcentral.net/thre...-at-constant-speed.123899/page-2#post-1442410

Thought I'd mention here that this BCM - an A03 from something like 2005 and/or 2006 - seems to implement a lot of what I describe in this thread: it doesn't charge nearly as high and appears to concentrate usage below 50%. Its pos recal level appears to charge to only about 50% actual. At pos recal the nominal state of charge jumps directly to 81%, then 6% of the charge is bled off via 12V load -- the top end is very circumscribed, much lower and much more consistent from recal to recal than on the other, older BCMs I've tried.

What's more though is that, since the nominal SoC is high yet the actual is middling and low, the BCM forces you to use that low half of the charge state (i.e. in other BCMs, a low nominal state of charge will trigger compensating behavior, such as background charging and throttled assist; here the nominal is high so these triggers aren't triggered). Over the couple weeks I've been testing it, I've basically operated below 50% all the way down to near empty (a true empty, too) -- which is almost precisely what I've been doing manually. But here, this BCM seems to do it on its own, I don't have to do anything...

I don't know, it's really weird how drastically different this BCM is compared to those others. I'm not sure what to think of the top-end threshold, for instance. In some sense it's too low... I also wonder if there's something different about my cells that causes a lower top-end. I don't think there is, but maybe there were slight differences in the cells used on later Insights that called for different programming, kind of like the purported differences between Insight cells/BCMs and Civic cells/BCMs... I did try to make graphs for Civic and Insight cells at one time, and those graphs do show that the Insight cells need to reach a higher voltage to reach a given high state of charge relative to the Civic cells. But I can't be sure that my test cells were truly representative... Plus, we're talking Civic vs. Insight here, not early Insight vs. later Insight...

In any event, seeing the differences with this A03 makes me wonder whether Honda in its later Insight years started to catch on to some of the things I've mentioned earlier in this thread...
 
#24 ·
FYI, the different behavior of the A03 BCM I talked about a while back, excerpt pasted below, turned out to be the 'cell difference' thing I describe at the end. Sources say that later Insights use Civic cells, and that this A03 BCM is programmed for those cells. But I was using it with early model, 2002 Insight cells, which need to be charged to a higher voltage to hit the pos recal point. Hence, the A03 BCM was inadvertently under-charging my early model Insight cells, pos recal-ing early. This is consistent with the graph I mention below: with the A03 BCM I was seeing a pos recal at something like 168-170V at ~6 amps. At the stick level that's about 8.45V. When I identify 8.45V on my graph, that corresponds to about 75% charge state for the Civic stick, but only about 40-50% for my Insight stick...

I guess I'll post that graph:

I've been using/testing a later model BCM for the past few weeks, discussed a bit here: IMA Battery will not charge on level road at constant speed

Thought I'd mention here that this BCM - an A03 from something like 2005 and/or 2006 - seems to implement a lot of what I describe in this thread: it doesn't charge nearly as high and appears to concentrate usage below 50%.... I don't know, it's really weird how drastically different this BCM is compared to those others. I'm not sure what to think of the top-end threshold, for instance. In some sense it's too low... I also wonder if there's something different about my cells that causes a lower top-end. I don't think there is, but maybe there were slight differences in the cells used on later Insights that called for different programming, kind of like the purported differences between Insight cells/BCMs and Civic cells/BCMs... I did try to make graphs for Civic and Insight cells at one time, and those graphs do show that the Insight cells need to reach a higher voltage to reach a given high state of charge relative to the Civic cells. But I can't be sure that my test cells were truly representative... Plus, we're talking Civic vs. Insight here, not early Insight vs. later Insight...
 
#11 ·
I have to admit I have difficulty following the details, but what I get out of your post is you're developing a management scheme that favors charge efficiency. I've read these types of cells are about 60% efficient but you say you're getting 99%, which is quite impressive. Also, you said the A03 BCM discharged down to 1 bar, which is what I saw with my vehicle at constant speed on level ground with this BCM. Your conclusion is the BCM is working as it should, although it manages the battery very differently than previous versions, with the goal being to increase battery life. I can certainly see why that is desirable, but I'd have to say that was not what I experienced. I'm relatively sure all G1 Insights used the same battery.


 
#12 · (Edited)
I have to admit I have difficulty following the details, but what I get out of your post is you're developing a management scheme that favors charge efficiency. I've read these types of cells are about 60% efficient but you say you're getting 99%, which is quite impressive.
I don't think I'd say I'm 'developing a management'. I just stumbled into this stuff. Originally I needed to test the IMA at lower than normal voltages because the LTO cells I wanted to use would need to be used at slightly lower voltages. But since then, actually, this experimentation revived my faith in the stock cells and I haven't had the need to make the switch.

As far as efficiency goes, when people talk about quote 'efficiency' they're usually talking about overall energy efficiency. When you say I'm getting 99% efficiency, that's coulometric or coulombic efficiency or something like that - amp-hour efficiency, basically... You can have high/good coulombic efficiency, like 99%, but when you charge the voltage is high and when you discharge the voltage sags, and voltage is the other half of the equation that makes up total energy efficiency. For example, say average voltage during charge is 164V, average voltage during discharge is 144V, while the current rate is the same, call it 6.5 amps. 164V X 6.5 amps = 1066 watts, 144V X 6.5 amps = 936 watts. 936W/1066W = 87.8% efficient...
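If it helps, here's that split written out - coulombic (amp-hour) efficiency and the charge/discharge voltage gap multiply together to give the overall energy efficiency; same example numbers as above:

def energy_efficiency(coulombic_eff, v_charge_avg, v_discharge_avg):
    # overall round-trip energy efficiency = amp-hour efficiency x voltage ratio
    return coulombic_eff * (v_discharge_avg / v_charge_avg)

print(energy_efficiency(1.00, 164, 144))   # ~0.878 -> the 87.8% in the example above
print(energy_efficiency(0.99, 164, 144))   # ~0.869 once 99% coulombic efficiency is folded in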

Also, you said the A03 BCM discharged down to 1 bar, which is what I saw with my vehicle at constant speed on level ground with this BCM.
No, what I was describing is different than what you must have seen. When I say it discharged to empty I'm looking at the OBDIIC&C and the data I collect/keep track of. The dash bar meter wasn't actually at 1 bar and never* went that low - because it's based on the nominal state of charge, and as I said, this BCM recals up to 81%. In other words, this BCM is showing a nominal of about, say, 80% when the actual state of charge is only around 50%, which means I have only around 40% of usable capacity, give or take. So, when I discharge 40%, the nominal will read 40% (80% minus 40% = 40%), yet the pack is really almost empty...
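As a toy illustration of that offset (the 80/50 figures are my rough estimates, not anything the BCM reports):

NOMINAL_AT_RECAL = 80   # roughly where nominal sits after the recal to 81% and the bleed-off
ACTUAL_AT_RECAL = 50    # my estimate of the true charge state at that same point
OFFSET = NOMINAL_AT_RECAL - ACTUAL_AT_RECAL   # ~30 points of 'phantom' SoC

def estimated_actual_soc(nominal_soc_pct):
    return nominal_soc_pct - OFFSET

print(estimated_actual_soc(40))   # ~10 - nominal reads 40% but the pack is nearly empty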

Your conclusion is the BCM is working as it should, although it manages the battery very differently than previous versions, with the goal being to increase battery life. I can certainly see why that is desirable, but I'd have to say that was not what I experienced. I'm relatively sure all G1 Insights used the same battery.
My guess is that, yeah, the BCM is working as it was intended to, the programming is very different than my older BCMs, and it was probably programmed differently to ... probably increase battery life, or something - the changes were probably meant to circumvent what Honda thought were deficiencies in the earlier models. They had a lot of warranty issues, so that would make sense to me... But, then again, the pos recal threshold does seem pretty low. I don't know if there's anything inside the BCM that could affect voltage measurement, seems possible, possible that it's a fluke and this BCM is measuring voltage too low... I'd have to test another A03 to really know whether these differences are model-specific. But they probably are...

In your case, using this BCM, if your pack isn't up to snuff, if it's even a little out of balance, if it's crudded-up at all -- I'm pretty sure this BCM would be a nightmare. You'd have barely any usable capacity. Probably most packs out there can hardly function under 50%. Mine can because I've done ultra-deep discharges and stuff. But any of my previous working-but-not-great packs would have barely functioned with this BCM. I'm pretty sure you just had very little usable capacity in your pack using this BCM - and it neg recal-ed and the BATT gauge dropped to 1 bar. On the other hand, you had an aftermarket pack and the voltage profile for those cells is slightly different, people have had issues, etc. So it could be related to that as well...

*It has gone that low, when I drained the pack and got a neg recal. Just that here I'm not talking about that.
 
#13 ·
Thank you for the clarification. I should have reiterated I was using Bumble Bee sticks I removed from my 2002 vehicle that were about a year old, but I see you remembered that. The Bumble Bee is rated 8.0 Ah versus the 6.5 Ah for the OEM and maybe that is the root of the problem. My 2006 has the OEM battery installed running a 2002 MCM/BCM pair. After two weeks, no issues, but I think these Tennessee hills give the battery quite a workout.
 
#15 ·
I'm re-posting here the image/graph that was posted in the first post. It's kind of pissing me off that I've revisited a couple of threads of mine and I keep finding images gone, and they're images that I've uploaded to IC itself. And then you can't edit the original post to put the image back.

 
#17 · (Edited)
Lately I've been dealing with a rebuild of an old failed rebuild, plus a single stick in my in-car pack that's always had a bit faster self-discharge in at least one cell. This rebuilt rebuild ended up having quite a few fast self discharge cells too. These are making me reassess the significance, the impact, of thwarting OEM management and using the pack at very low charge states like I've been doing. 'Crud' build-up and the resulting voltage sag remains an issue, a problem that low charge state usage seems to help. But also, I think relatively fast self discharge and the imbalance that entails is also a problem that low charge state usage would help mitigate. I'm thinking that it's possible the A03 BCM program changes I mentioned above - where it forces low charge state usage - might've been implemented to address these two big, debilitating problems.

When it comes to uneven self discharge, the problem becomes one or more cells self-discharging low while the others remain high. When the car goes to charge the pack, it becomes difficult to charge the low cells enough without overcharging the high cells (which the BCM won't do). But, if the pack were used at a lower charge state on a routine basis, there would always be a much larger charge window.

For example, if OEM management routinely charges the pack to a high of 80%, like most BCMs do, and a single cell discharges to zero, then the most you can charge the pack, the most usable capacity you can have, is only 20% (assuming the BCM allows extra charge in such situations, allowing charge to just under 100%). Once the pack charges 20% the high cells are full and any more charge will overcharge them. Meanwhile, the cell that dropped to zero will have only charged to 20%.

But, if OEM management routinely charges to a high of only 50%, like the A03 BCM appeared to do when I was messing with it, your charge window grows to 50%. The high cells will be at 50% charge state, the high self discharge cell drops to zero -- but you can still charge 50% before the high cells reach full and begin to overcharge. That means your high self discharge cells can charge up to 50% charge state.
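Here's that 'charge window' arithmetic sketched out - simplified, ignoring efficiency and assuming the BCM will let the healthy cells go to just under 100% in this situation:

def charge_window_pct(routine_top_of_charge_pct, allowed_max_pct=100):
    # how much can go back in before the healthy (high) cells hit the allowed max -
    # which is also the most an empty, fast-self-discharge cell can be refilled to
    return allowed_max_pct - routine_top_of_charge_pct

print(charge_window_pct(80))   # 20 -> the empty cell only ever gets back to ~20%
print(charge_window_pct(50))   # 50 -> the empty cell can get back to ~50%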

All in all, this 'using low charge state' tactic seems like a pretty powerful way to combat two of the biggest problems for aging packs...
 
#18 ·
I wonder if the first cell to "go bad" keeps the next weak cell from "going bad" because the first cell triggers the stop of discharge before the next weak cell can be driven to a level that damages it. I also wonder if the bottom of every discharge imparts a tiny but permanent insult to that weak cell.

I also am finding the pack rebuild, where one replaces the bad stick, may not be sufficient. I think the above is happening, and when the next weak cell comes up to bat, that low discharge, for it, is like taking a wild pitch in the gut.

I am wondering if out-of-pack cell-level stick conditioning is in order, but yeah, that's not something you want to do manually.
 
#20 · (Edited)
Here's a better graphic depicting the concept described above, same data. It's really quite striking and, I think, a good illustration of the potential impact of the whole 'use low charge state' idea, as well as of the benefits of deep discharging in general.

Cell C2 (yellow curve) was a relatively fast self-discharge cell, and so in the car it was perpetually used at the lowest charge state - basically always driven to empty. Cell C3 (black curve) was more or less normal - no fast self discharge, no major deterioration, etc. Since the car's battery management disables discharging (assist) when C2 is empty, all other cells never get close to empty - and so they suffer from that short cycling, frequent high charge state charging, etc. They become voltage depressed. This is also what happens in general due to the way the BCM manages the pack, regardless of having a fast self-discharge cell or not, and to the way most people drive (i.e. letting the BCM do whatever it wants, which of course is completely reasonable). C2 benefits, though, from the deeper discharge and/or the frequent, regular low charge state usage.

Here's another way to think about this concept (low charge state usage/deep discharge) and the benefits. The voltage just after the start of the yellow curve discharge is about 1.25V; the voltage on the black curve underneath that point is about 1.1V, so a 0.15V difference. Now, multiply that by the 120 cells in the Insight pack: 120 X 0.15V = 18 volts. IF low charge state usage helps to preserve the higher low-charge-state voltages you get from an initial ultra-deep discharge, it's like your pack grew by an additional 18 volts! Not only that, but that additional voltage is happening at the lowest charge states, i.e. you're not getting premature neg recal, you're able to use the full pack capacity.
 
#21 ·
I'm confused, because C2 looks like it only has 600mAh of capacity remaining. Am I misunderstanding?
 
#22 ·
Right, that's what I said a couple posts up. The pack sat for about a month, C2 self discharged and ultimately only ends up with 600mAh. But because of the faster self discharge, it was also cycled low, and cycling low keeps the voltage from becoming depressed. I'm using this as an example of the ameliorative impact of cycling low. C2 itself is not a good cell - because of the self discharge. Ideally all the cells would be equal, with even self discharge, and then you'd be able to cycle all the cells low and they'd end up with nice, lofty voltage curves, just like the C2 curve, only all the way up the charge state range.
 
#23 · (Edited)
Figured I'd add a perfunctory update of sorts to the above ideas, as I've been mostly continuing with, well, testing of pretty much the whole 'low charge state usage' theory that started this thread. But most recently it's just focusing on what's going on with this one faster self discharge cell and taking low charge state usage to the ultimate extreme -- near perpetual rock-bottom pack usage... I have to say something because it just blows my mind every time -- just how different, and better, these cells work than you'd think if you had just used them as they're normally used in the car...

Most recently, I've been driving for at least a week or two with the actual state of charge for most usage no higher than 20%, but I estimate more like around 13%, with one cell, the fast self discharge cell, at zero when the pack neg recals. I've basically been resetting nominal state of charge with OBDIIC&C to 40% when pack neg recals, and pack neg recals when this fast self discharge cell is close to empty - while other cells are probably around 13%... Last night I did some tap discharging and subtracted probably about 450mAh from the 13% cells, so they're closer to the fast SD, 'zero' cell...

Suffice it to say that I've been operating the pack really low and trying to get it even lower. It should be around 6% now...

Power Output
The main thing I want to emphasize here is the raw performance capability of the OEM NiMH cells. Even at this super low charge state and at cool temps (say about 50F to 68F), I can easily pound out about 70 amps at about 120-130V. Voltage doesn't plummet, it stays pretty steady, I can get the full 4 seconds or so of full throttle assist. When I do let it charge up higher, 30 amp assist happens at no lower than 140V, I think usually around 140-144V; 20 amp assist is usually above 150V, I'm often seeing something like 150-156V. This is higher than I used to see; it used to be more like 20 amp assist at the greater than 140V level. I have to add that I don't have to charge it very high at all, though, to see this kind of performance. It takes maybe 10-15% above the neg recal point to see these metrics...

Temp
I pound on this pack and the temp change I see is nowhere close to what I've seen in the past with various packs. I used to think it was 'normal' for the pack to increase in temp quite noticeably after some full assist and regen events. But now, I do see a little temp increase, but it takes quite a bit of usage; for example, probably something like half a dozen full assist and braking regen events and I might see a temp increase of a couple degrees F. In the past I've seen single 'full assist' events make temp increase right away. I never see that now...

Gets rid of Voltage Depression
Another thing I noticed is that, when I do let the pack charge up, pack voltage goes up to about 168-170V fairly quickly and stays around there during most of the charge. I'm pretty sure this low charge state usage does indeed eradicate voltage depression. So, if you go back a couple posts and look at the black and yellow discharge graph I posted, one cell is depressed (black) while the other isn't (yellow). I'm pretty sure that now the depressed cell, which is in this pack, can't be depressed any longer. If I discharged it its curve would look more like the yellow curve, lofty until the end...


Anyway, the performance I'm seeing is... fascinating. I wish I had a better handle on the electro-chemistry. What's interesting is that I just never would have expected these cells could put out so much power at such low charge state. There's this weird thing going on, this incongruity or something, with the difference between charging and discharging: charging is harder, discharging is easier; charging heats the cells more than discharging. Not all chemistries are like that; it's the opposite with my LTO cells and I think my boilerplate lipo cells, too... I'm thinking that something with this incongruity might have to do with why perpetual high charge state usage causes problems, and on the flip side, why low charge state usage would end up, well, 'fixing' things in the first instance, but then performing better and preserving the performance... It just seems really weird that the cells would have such an easier time pulling protons from the negative electrode and pushing them into the positive electrode (discharging) on the one hand, versus pulling them from the positive electrode and pushing them into the negative (charging)... If anyone knows why that'd be the case I'm all ears...
 
#25 ·
Just wanted to post a quick thought/idea/question, I think it's mostly related to the topic in this thread.

I'm wondering just how much 'inefficiency' is the result of energy lost as heat versus energy that goes to 'side reactions' - whatever reactions that don't contribute to useful energy output?

I've continued more or less with this whole efficiency logging/testing thing, and these days, with my various manual management techniques, the 'loss' is so much less than it used to be. In the first post there's a graph that depicts the concept. Basically, back then, after say a few months of usage, net amp-hour input would be like two times the pack capacity to maintain a given charge state; now it's like maybe 25% extra (so '2 times' = 200%, now it's maybe 25%)...

I've wondered similar before, but now it just seems more vivid, real.

A couple other things make me wonder about this - heat vs. 'side reactions'. For instance, recently I did some bench work and it struck me that it doesn't seem like 'our cells' actually lose capacity - like the capacity is all still there (in general, usually), it simply needs to be extracted at super low current. So over time/usage the cells lose the ability to support even modestly high currents, such as on discharge, so the cell effectively loses 'capacity', simply because it can't maintain the voltage that it needs to at the currents that it's supposed to. But, use really low current and I think you'll almost always 'pull out' something close to the stock rated capacity (you can genuinely lose capacity, but I think that's secondary)...

This inability to support high currents actually happens at the bottom and at the top: there seems to be a range of charge state, say between 75% and 100%, and between 0 and 25%, for which the ability to support high current depends on whether these ranges are actually used. Use them and those ranges can be very small, like instead of 75-100 (25% of absolute capacity not used or usable) it can be 95-100 (5% not used or usable), and instead of 0-25 it can be 0-5. I don't think the stock management, though, actually implements a strategy to make you use the bottom and/or top. If it does it doesn't happen very often, I don't see it...
 
#26 ·
Considering what other experts have written about 'the weaknesses' of our cells, I think I have a rudimentary conceptualization of what might be happening with this 'lost capacity/charge state' in the upper and lower ranges - essentially a 'squeezing' of the capacity, an effectively shrunken cell. And I think I have a slightly... deeper insight into the cause, at a slightly finer-grain level, though it's really muddy...

Given that I seem to be able to stretch the capacity (performance) of my cells if I simply use the bottom and top, it makes me think of the 'large crystal growth' idea. Larger nickel oxide crystals can't support high currents/large loads. If you don't use say the top or bottom 25% of absolute capacity, it seems like you'd always be picking the low-hanging fruit. I've written this before.

If you have a lot of large crystals of 'active material' at hand, you can achieve a semblance of performance within the performance demands of the Insight. The crystals are large and can't handle much current on their own, but if you have enough of them they'll do the job. Reactions take place on the surface of crystals - so you're basically depending on a lot of large crystals to do the work, and that work is taking place on the surface - which begets more surface reactions on subsequent charges and discharges. The crystals grow larger. Your capacity is effectively shrinking.

BUT, if/when you actually use top and bottom, you're forcing 'the system' to use harder to react material, to reach into deeper material. This means breaking-up the crystals. I think, when you use high and low charge state, you end up with smaller crystals that can support higher and higher currents... Voltage stays higher, under higher loads, you don't hit the management lower or upper voltage thresholds as soon, temps stay lower, etc etc...

As far as those "slightly deeper insights" go, I'm just thinking this: Our cells have a range of 'stoichiometry', voltage hysteresis - the main reaction doesn't happen at only a single voltage, rather, it can happen within a range. This happens day-in and day-out. If you charge a cell, the discharge voltage just after that charge will be much higher than if you had discharged after letting the cell sit for a day or so. It's amazing I never really noticed this before, it seems so profound. How you're using the cells can impact the efficiency, the power output and input capability.

If, say, you were to charge the pack - leave it charged high when you park at night - over night you'll lose a lot of charge state. When you go to use assist, the voltage will slump a lot... I'm not sure what the range/boundaries are, but it looks like there's a low plateau at around 1.20V to 1.25V, whereas the high, seemingly more correct/normal plateau is 1.31V to 1.34V... So, let's say these are right, so the difference, all else being equal, would be like:

120 cells X 1.20 to 1.25V = 144V to 150V vs.
120 cells X 1.31 to 1.34V = 157.2V to 160.8V

So the 'slumping' lost-charge-state pack will tend to gravitate toward about 144V-150V, while the 'correct' one will gravitate toward 157V-161V... Of course, in the car you've got losses due to resistance, temp, depends on current rate, et al - so the actual voltages will be different. But the 'idealized' voltages might be these.

Let's correct for resistance. Say total resistance is 3.5mΩ per cell, x 120 cells = 0.42 Ω for the pack.
At a 20A discharge current, that 0.42Ω resistance will reduce the voltage by 0.42Ω X 20A = 8.4V.

The 'slump pack' voltage at 20A would be about 138.6V; the 'correct pack' voltage would be about 150.6V...
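Spelling that arithmetic out (I'm taking the midpoint of each plateau range, which is where the 138.6V and 150.6V figures come from):

CELLS = 120
PACK_RESISTANCE_OHM = 0.0035 * CELLS   # 3.5 mOhm per cell -> 0.42 ohm for the pack
CURRENT_A = 20

def loaded_pack_voltage(cell_plateau_v):
    # open-circuit pack voltage minus the resistive drop under a 20A load
    return CELLS * cell_plateau_v - PACK_RESISTANCE_OHM * CURRENT_A

print(loaded_pack_voltage((1.20 + 1.25) / 2))   # ~138.6V, the 'slump' pack
print(loaded_pack_voltage((1.31 + 1.34) / 2))   # ~150.6V, the 'correct' pack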

That's actually pretty close to what I see in the car, when driving - in general... I think that's the way it's calculated.

So... Drive, leave the pack charged high, come back to slumping voltages. Charge and then discharge on the drive and the range of charge state that gets used will return to higher voltages - on that drive. I think I'm saying that leaving the pack charged high isn't a good thing. One, you'll lose that charge over night anyway, so why bother. But two, I'm thinking that something in this process actually ends up 'crudding-up' the cells - it's the difference between my '200% extra charge' over that few month period vs. the '25% extra', that I mentioned in the previous post.

It's like, what happens on the day-to-day usage time scale ends up compounding on the longer, month-to-year time scale. What happens in 'microcosm' ends up happening at 'macrocosm'... i.e. the losses I can see within a 24-hour period alone - if I were to use the pack one way versus another - would end up compounding over time, were I to continue to use the pack that one way versus the other.
 
#27 ·
On my short drive this morning I was musing about some concepts/definitions that have really caused me much trouble understanding 'all this'. When faced with Insight NiMH cells and how they seem to work, it becomes doubly confusing.

"Capacity" and "charge state" - two things that, really I'm not sure I understand.

With Insight cells, you can charge to a high - what I've been calling "absolute capacity level" or "absolute charge state." The cells are say 6500mAh capacity. Charge them with 6000mAh current input, and all else being equal, you've charged them to 6000/6500=92.3%, right? Ignore efficiency and all that.

That value - 92.3% - however, I don't think it's technically the same as quote "state of charge" or "charge state."

Say I charge 6000 mAh input, a single cell, the voltage goes to 1.433V at rest. Were I to do some discharge on this cell immediately, with a moderate discharge current, the voltage might fall to about 1.31V under load and, after removing the load, it'd rebound to something like 1.358V. If however I let the cell sit over night, come back next day and do the same type of discharge, the resting voltage upon my return would probably be around 1.40V, the loaded discharge voltage would slump to around 1.25V, and the rebound voltage might be around ... maybe 1.30V to 1.32V...

This is what's so confusing - because this range of potential voltages can happen anywhere within say the 5%-95% 'absolute capacity-charge state level', yet, clearly it seems, one set of voltages reflects a higher 'charge state' than the other.

So, I could charge one cell with 6000mAh input, let it sit over night, come back and end up with 1.25V discharge-loaded voltages; on the flip side, I could charge another cell with only say 3000mAh input (or way less), discharge immediately and see say 1.31V discharge-loaded voltage.

The first cell might rebound to 1.31V; the second cell might rebound to 1.37V - the first has a lower voltage, but it's charged to a much higher absolute capacity level, that 6000mAh versus only 3000mAh. Which cell has the higher "charge state"?

So basically, there are two distinct concepts here that I've all too often conflated. I'm really not sure how that's dealt with, but it seems clear that they're not the same.

I can operate my cells at what seems like a higher 'charge state window' or something like that, even though they can be at a very low absolute charge state or absolute capacity level. And the flip side is true as well: I can operate my cells at what seems like a low charge state window - even though the cells are charged to a high absolute charge state or capacity level. The 'absolute charged capacity level' or whatever doesn't seem to matter, as far as I can tell, at all. Whether the cells operate at the higher charge state window or lower depends totally on how recently you 'charged up' that window...

I think, if I really wanted to understand this, I'd have to study the whole 'oxidation-vario-stoichiometry' concept. That seems to be key, that seems to be what determines the range of potential operating voltages. I just haven't been able to penetrate the...logic of it all, the jargon, etc. It basically requires a firm understanding of some basic chemistry concepts, like oxidation-reduction reactions. I can never even remember whether a compound that's oxidized loses or gains electrons...

NiMH cells are fully discharged when only Ni(OH)2 remains (full reduction); fully charged when only NiOOH exists (fully oxidized). Both have 2 parts 'O', but the NiOOH has only a single hydrogen atom. The 'II' and 'III' you often see written next to the Ni refer to the oxidation state of the nickel itself: Ni(II) in the discharged Ni(OH)2, Ni(III) in the charged NiOOH. NiOOH is the 'charged' state because it 'has room' for another hydrogen atom or proton; when you discharge the cell, a proton comes from the negative electrode side, an electron moves through the external circuit, that hydrogen ends up at the positive electrode and the NiOOH becomes Ni(OH)2. Taking on that electron is what "reduction" means - the nickel drops from +3 down to +2 - so the positive electrode is reduced during discharge and oxidized back during charge...
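Written out as the standard half-reactions (discharge direction, left to right, with charge running them in reverse):

NiOOH + H2O + e- -> Ni(OH)2 + OH-   (positive electrode)
MH + OH- -> M + H2O + e-            (negative electrode)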

"Vario-stoichiometry"? I'm thinking the 'text book' case or condition treats the ratio of O to Ni as being 2, 2 parts O, 1 part Ni. But, the 'vario-stoichiometry' aspect means it isn't always or necessarily that perfect ratio. Is it more O or more Ni?... I can look for that in a book I have... Or actually, let me think about this. The 'Huggins' idea, that memory effect is caused by 'HNi2O3' is a similar concept. Here the ratio of O to Ni is 3/2 - so it'd be less O (3/2 vs. 2/1). The vario-stoichiometry I think might be the range from say 3/2 to 2/1... In conjunction with this 'memory effect' idea, the 3/2 is less charged, a lower charge state. You can have a cell or electrode that's, I don't know, in a sort of equilibrium state, when the ratio of O to Ni is within this range, not just at a single value, a single ratio... Ideally, I think, it'd be that 2/1; but when 'whatever' happens, it can be less than that and the cells don't perform as well...

I think I'm getting a little closer...
 
#30 ·
I did re-read some 'Huggins', but I can't say it amounted to much. One thing I think I did clarify a bit, though, is that initial question - capacity vs. charge state:

..."Capacity" and "charge state" - two things that, really I'm not sure I understand. With Insight cells, you can charge to a high - what I've been calling "absolute capacity level" or "absolute charge state." The cells are say 6500mAh capacity. Charge them with 6000mAh current input, and all else being equal, you've charged them to 6000/6500=92.3%, right? Ignore efficiency and all that.

That value - 92.3% - however, I don't think it's the same as the vernacular quote "state of charge" or "charge state."
According to the definition of "charge state" in the Huggins, though, that 92.3% value is technically the "charge state":

"The state of charge is the present value of the fraction of the maximum capacity that is still available to be supplied."

In a nutshell, I think there's a sort of vernacular, amalgamated usage of quote "charge state" that mushes together voltage and charge, and that really doesn't quibble with or parse the concepts of current, power, and energy. Yet when you're trying to find answers you really need to do the parsing...

"Charge" needs to be thought of like "charge" in a capacitor, I think - you've just got pluses and minuses, + and -, if you've got a lot of + you've got a lot of 'charge' (of course, you've got to think of these + and - in terms of the potential difference between the positive and negative electrodes, not just strictly + and - verbatim).

If you charge an Insight cell with 6000mAh of current input - maybe you have cruddy cells and can't discharge the cell passing another 6000mAh of current on the way out, but only because the voltage drops too easily, too fast, to do it within your voltage window. If you drop the current and/or lower the voltage window - lower the end-voltage threshold, say from 1V to 0.8V - you'll get that full 6000mAh back (again, ignoring minor Coulombic inefficiency for now). The quote "charge state" when you started was indeed 6000mAh/6500mAh = 92.3%; it just might not be accessible within the operating parameters that you need or use...
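Just to keep that arithmetic straight, here's a minimal sketch of the fraction in Python - only the numbers from the example above (6500mAh nominal, 6000mAh and 3000mAh in), ignoring Coulombic losses and self discharge:

```python
# Charge state in the Huggins sense: fraction of maximum capacity available.
# A minimal sketch of the arithmetic above, nothing Insight-specific.

NOMINAL_CAPACITY_MAH = 6500  # nominal cell capacity used in the example

def charge_state_pct(charged_mah, nominal_mah=NOMINAL_CAPACITY_MAH):
    """Charge state as a percentage of nominal capacity."""
    return 100.0 * charged_mah / nominal_mah

print(round(charge_state_pct(6000), 1))  # 92.3 - the example figure
print(round(charge_state_pct(3000), 1))  # 46.2 - the 3000mAh cell from earlier
```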

Voltage, on the other hand, is...just a different thing. It makes sense to think of voltage and current together as some sort of master charge state concept, but 'charge' is one thing, and voltage is another. I don't think I know the language though, still, to deal with, say, a high charge state and slumping voltages vs. a low charge state and high voltages. Maybe it's 'energy state'? Or 'energy density' or something like that.

'Despite high charge state, Insight cells can have a variable energy and power density, depending on the oxidation state of the positive electrode', or 'depending on the voltage behavior of the cell'?

'Insight cells can have a higher power output at low charge state versus high charge state, it depends on the oxidation state of the positive electrode, which is variable'?

'The power output of Insight cells can vary solely based on recent usage of the cell, where low charge state can end up putting out more power than high charge state.'

The chemical energy in the cell: the configuration of stuff in the cell is what ultimately determines its energy output. You can 'charge' the cell with a fixed amount of current, say that 6000mAh, but it doesn't necessarily end up as stored chemical energy the same way in the same amount - because the configuration of that stuff in the cell can vary. It's kind of like stacking books in a bookcase: You have a pile of books - the current - and you have a bookcase, the cell. Say the objective is twofold: you need to fit all the books in the bookcase, but you also have to order the books from heavy to light from top to bottom (I know, just the opposite of what you'd really want to do, but bear with me).

If you do it fast you just try to stuff the books in place, with the shelves arrayed as-is. It's hard to get the right order - you get all the books in place (all the current, that 6000mAh), but your organization leaves a lot of heavy books low-down. If you take your time, you see that you need to increase the space of the top shelf - make it bigger for heavier books - so you lower the top shelf. Now, when you place the books you're able to fit the big heavy books on the top shelf, and you do the same moving down the bookcase.

You've fit all the books in both scenarios, but in the latter scenario, where you re-configured the shelves, you ended up with the higher-energy configuration... Insight cells can have different shelf arrangements, and some are more conducive to... I think it's high power output, not necessarily higher energy output. So maybe the bookcase analogy isn't quite right...
 
#28 · (Edited)
Kind of just sitting here meditating on the stuff above. Wondering what 'highly oxidizing conditions' are and mean, and what that entails for voltages.

When you continue to charge cells that are nearly full, what happens? I've read a bit here and there but never had much of an understanding from it all. IF hydrogen continues to be withdrawn from the positive electrode, the electrode becomes more 'oxidized' - the nickel's oxidation state keeps climbing - and its potential moves up, more positive. Meanwhile, the hydrogen, the H+, is going to the negative electrode (eventually); the negative electrode is being reduced as it takes up that hydrogen, and its potential moves down, more negative. The voltage, therefore, is increasing - positive electrode potential up, negative electrode potential down, the potential difference is getting bigger...
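A quick way to keep the signs straight: cell voltage is just the positive electrode potential minus the negative electrode potential, so moving the two electrodes apart grows the voltage. The numbers below are rough, illustrative electrode potentials (vs. a Hg/HgO reference), not anything measured on an Insight cell:

```python
# Sign bookkeeping for the paragraph above: cell voltage is the positive
# electrode potential minus the negative electrode potential. Pushing the
# positive electrode up (more oxidized) and the negative electrode down
# (more reduced as it absorbs hydrogen) both widen that difference.
# The potentials are rough illustrative values, not measured figures.

def cell_voltage(e_positive, e_negative):
    """Full-cell voltage from the two electrode potentials."""
    return e_positive - e_negative

print(cell_voltage(0.49, -0.83))  # ~1.32V, near the resting/equilibrium value
print(cell_voltage(0.55, -0.90))  # deeper into charge: both move apart, voltage grows
```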

I know I've seen something called 'the proton-deficient limit' - at some point you've extracted all the H+ you can without, I guess, physically destroying the electrode. The electrode structure, the arrangement of the nickel, oxygen, and hydrogen, can hold together well enough with only so much removal of the hydrogen... I'm wondering what the actual voltages are that we'd need to see for, say, having the oxidation state (?) go above 2/1, having the ratio of O to Ni go above 2? And then, what kind of voltages would we/should we be seeing at the 'proton-deficient limit'?

There's an old graph/diagram in the 'Huggins' book that sort of illustrates this. Maybe I'll post that here:

[attached image: the Huggins diagram referenced above]


The problem is, I could never quite understand where his arrows are pointing to, exactly, how the values here translate to real-world voltages... The flat equilibrium plateau is supposed to be between 1.327V and 1.367V. I think this means that the normal reaction is supposed to happen within this voltage range - at or near equilibrium conditions, which basically means at really low current.

In the diagram, there's a sloped line toward the top with an arrow pointing to it, with a label that reads, "only NiOOH present." It looks like that has to be happening between the top value of the normal plateau, 1.367V, and what looks like a little above 1.50V. That 1.50V kind of makes sense - our own cells peak at about 1.53V at full, at about a 6.5A current. I think it actually gets close to that at even lower currents...

1.327V to 1.367V: for a pack of 120 cells, those values would be 159.2V to 164.0V. That's really quite interesting as I almost always see my pack voltage 'equilibrate' to within this range. I can charge high, for instance, and see a higher voltage, say 170V, but just a little discharge and voltage will drop to around 163V, + or - a bit... Technically, supposedly our cells have an equilibrium voltage of 1.318V, that's a Honda-reported value, plus I've seen almost exactly that in almost every cell in a given pack at one time or another, say at a middling absolute charge state...
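For reference, the per-cell-to-pack conversion is just multiplication by 120; a tiny sketch using the plateau endpoints and the Honda-reported equilibrium value:

```python
# Scaling the per-cell figures above to the 120-cell Insight pack - just the
# multiplication, nothing more.

CELLS_IN_PACK = 120

def pack_voltage(cell_voltage, cells=CELLS_IN_PACK):
    """Scale a single-cell voltage up to the full series pack."""
    return cell_voltage * cells

plateau = (1.327, 1.367)  # Huggins equilibrium plateau, per cell
print([round(pack_voltage(v), 1) for v in plateau])  # [159.2, 164.0]
print(round(pack_voltage(1.318), 1))                 # 158.2 - Honda-reported equilibrium
```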

So... the question is, what exactly is going on when I see resting voltages above, say 164V? What exactly is present that makes the voltage high? Maybe it's, like, that extra proton removal - the electrode is sort of artificially oxidized, temporarily 'pumped-up' (or really the opposite, since the H+ is removed, not 'pumped' in)... And, perhaps, that makes it especially easy for the electrode to re-equilibrate with the electrolyte: That's another thing I've read, that Ni(OH)2 is not electronically conductive, but NiOOH is. When a cell, or positive electrode, is fully charged, the Ni(OH)2 is gone, and it no longer serves as a barrier layer between the electrode and the electrolyte; it's in contact with the electrolyte. So basically, H+ can migrate into the positive electrode and the charge state is lowered, the voltage drops...

I actually think this is probably close to right, close to what goes on, why we see higher voltages upon relatively high 'absolute charge-state charge', and why they tend to drop pretty quickly... Oh but, actually, I don't think it has to be "high absolute charge-state charge"; rather, it can be anywhere - that's the impact of the vario-stoichiometry (I think). I can see high voltages at low absolute charge state; I can see low voltages at high absolute charge state - it doesn't matter, for the most part.

So now the question is, so what? What would it mean, could it mean, if you have voltages above 164V more often or less often? Does it harm the cells? Here I'm talking about imaginary, pseudo-voltages - not necessarily the real-world voltages you see, but rather ones that are adjusted for non-equilibrium conditions. In other words, you can see voltages well above 164V, but most of the increment above 164V might be, say, due to resistance, like having a resistor in the cables. The 'actual' voltage in the cell would be that total minus the amount that comes from the resistance...
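Something like the following sketch is what I mean by 'adjusted' - subtract the resistive part from what you measure while current is flowing. The resistance number here is purely an assumed, illustrative value, not a measured pack figure:

```python
# A rough sketch of backing the resistive part out of a voltage observed
# while current is flowing, to get at the 'actual' pseudo-voltage discussed
# above. The resistance figure is an assumption for illustration only.

ASSUMED_PACK_RESISTANCE_OHMS = 0.25  # lumped pack + cabling resistance (assumed)

def adjusted_voltage(measured_volts, charge_amps, resistance=ASSUMED_PACK_RESISTANCE_OHMS):
    """Subtract the I*R contribution from a voltage measured during charging."""
    return measured_volts - charge_amps * resistance

# e.g. 170V seen while charging at 20A would be closer to 165V 'actual'
print(adjusted_voltage(170.0, 20.0))
```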

I'll have to come back to this later.
 
#29 · (Edited)
You know, it's interesting. I read this book, parts of it, quite a long time ago, reading the most relevant parts many times. But I haven't really been thinking about it over the past few years, while I've been doing my 'low charge state usage' experimentation and such. But now, having more or less banked some of my own real-world observations, and having revisited a bit of that book, I'm thinking, 'Jeez, what I've been seeing sounds an awful lot like what's described in the book, in this chapter on 'memory effect'.'

I think I need to re-read that stuff.

In general, as I recall, the 'Huggins' idea is that 'overcharging' causes memory effect. That's basically what's being depicted at the top of this diagram, where there's an arrow pointing to a plateau, at a little above 1.50V, with a label that reads, "NiOOH, HNi2O3, O2 triangle plateau." That refers to a 'Gibbs triangle', or something like that, a method to suss-out the potential relations/reactions of different materials.

I think the idea is that, if you keep charging a cell when it's already reached the normal oxidation state, that 2/1 ratio I guess, it can become more oxidized via reactions other than the normal one. You're no longer converting Ni(OH)2 to NiOOH; you're converting NiOOH to something akin to 'HNi2O3' (there's different notation for these things, depending on what your focus is; NiOOH in this latter notation would be HNiO2). So instead of 'HNiO2' we end up with HNi2O3. One thing I'm not getting, however, is all the steps in between: I thought we had more oxidizing conditions, yet how do we end up with a lower 'oxidation state' - 3/2 vs. 2/1?
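One way to see that puzzle concretely is to do the oxidation-state bookkeeping, with the usual H = +1 and O = -2 assignments; a quick sketch:

```python
# Oxidation-state bookkeeping for the phases mentioned above, assuming the
# usual H = +1 and O = -2 assignments. It just confirms the puzzle: HNi2O3
# works out to a lower average nickel oxidation state than NiOOH, even though
# it's said to form under more oxidizing (overcharge) conditions.

def avg_ni_oxidation_state(n_h, n_ni, n_o):
    """Average Ni oxidation state for a neutral compound of H, Ni, and O."""
    return (2 * n_o - n_h) / n_ni

print(avg_ni_oxidation_state(2, 1, 2))  # Ni(OH)2: +2.0 (discharged)
print(avg_ni_oxidation_state(1, 1, 2))  # NiOOH (a.k.a. HNiO2): +3.0 (charged)
print(avg_ni_oxidation_state(1, 2, 3))  # HNi2O3: +2.5
```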

In any event, when you take this HNi2O3 in context, it ends up lowering the voltage - so much lower that the cell becomes essentially unusable in whatever equipment you're using, equipment that requires the higher, normal voltages.

The remedy is deep discharge: this substance supposedly has a plateau at about 0.78V; once you discharge the cell at least that low, the discharge is supposed to get rid of the substance - and thus the plateau and the cell's overall lower voltage.

Not sure of the precision or accuracy of the voltages, but the general concept seems in line with what I've seen in the real world.

It seems plausible to me that, in general, higher operating voltages - say prolonged, consistent usage above a true, let's call it, 1.40V per cell (about 168V for the 120-cell pack), just above the highest equilibrium voltage on the normal plateau - will tend to produce more 'HNi2O3', or something akin to it. And that's what ends up shrinking usable capacity, lowering performance, etc. And I don't think the cell has to be chock-full for you to see this degradation - you can see relatively higher voltages anywhere within the 'absolute charge state' range.

Something like this would go a long way toward explaining my whole 'inefficiency' angle. In the most basic way, my pack input and output was extremely inefficient when I mostly let the car do its thing, which was usually charging high - keeping the absolute charge state high, and keeping the voltage relatively high as a matter of routine. It didn't become more efficient until I started dropping the usage window way down, using low charge state, keeping voltages relatively low...

As I recall, it was pretty normal to, for one, leave the car overnight with the pack at a high voltage, say around 168V. I wouldn't think much of using the pack at will with voltages around 165V+, say doing braking regen. I think it was pretty rare to see resting voltages below around 156V - the BCM would usually be background charging, again as a matter of routine, at or before the point pack voltage got that low...

The flip side, with manual management techniques, is in effect a general lowering of the voltage window - probably much more between 150V and 160V, rarely leaving voltage above 164V max, usually probably closer to 161-163V...

So, in stock/OEM form, it's probably like 156V to 168V, manual mode is more like 150V to 163V.
Manual mode includes a lot of usage in the 144V-150V range as well, resting voltages.
OEM mode - I don't think it ever lets you use that range, that is, I don't think you ever see resting voltages in that range, the BCM would have background charged by then...
In OEM mode, you'll have pack voltage sitting above 165V a lot of the time; in manual mode, you'll only see this if you purposely charge the pack high...
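Putting rough numbers on those two windows, per cell, with the same 120-cell conversion and the 1.367V plateau top as the reference line - the endpoints here are just the ballpark figures from this post:

```python
# A back-of-the-envelope comparison of the two usage windows described above,
# converted to per-cell terms. The window endpoints are the rough figures from
# this post, not measured limits.

CELLS = 120
PLATEAU_TOP_V = 1.367  # top of the Huggins equilibrium plateau, per cell

windows = {
    "OEM":    (156.0, 168.0),  # rough range letting the car manage itself
    "manual": (150.0, 163.0),  # rough range with manual/low-SoC management
}

for name, (lo_v, hi_v) in windows.items():
    lo_cell, hi_cell = lo_v / CELLS, hi_v / CELLS
    above_plateau = hi_cell > PLATEAU_TOP_V
    print(f"{name}: {lo_cell:.3f}-{hi_cell:.3f} V/cell, "
          f"top of window above plateau: {above_plateau}")
```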

[later...]
Yesterday I wrote this: "1.327V to 1.367V: for a pack of 120 cells, those values would be 159.2 to 164.0... Technically, supposedly our cells have an equilibrium voltage of 1.318V, that's a Honda-reported value..."

Since I've been using a lot of voltage ranges, it'd probably be good to 'adjust' the theoretical range found in that Huggins diagram to be more consistent with the Honda-reported equilibrium value. The Honda value is 1.318V; the Huggins range is 1.327V to 1.367V. So, a range that'd be consistent with the Honda value might be:

(1.327 + 1.367) / 2 = 1.347V; 1.318V - 1.347V = -0.029V -- so adjust the range downward by 0.029V.

Instead of 1.327V to 1.367V it'd be 1.298V to 1.338V, or 155.8V to 160.6V for the pack.

No idea if this is even conceptually sound, but I figured it'd be good to calculate such an alternative range anyway. I'm thinking different additives, such as cobalt, could be the difference between pure textbook values and real-world ones. This adjusted range, though, actually looks pretty low; the voltages I see are, I think, usually closer to the Huggins range...
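For what it's worth, here's the same adjustment written out as a little sketch - purely the arithmetic above, with no claim that the shift itself is sound:

```python
# The 'shift the Huggins plateau to line up with the Honda equilibrium value'
# adjustment worked out above - just the arithmetic from this post.

CELLS = 120
HUGGINS_PLATEAU = (1.327, 1.367)  # per-cell equilibrium plateau from the diagram
HONDA_EQUILIBRIUM = 1.318         # Honda-reported per-cell equilibrium voltage

def shifted_plateau(plateau=HUGGINS_PLATEAU, target_mid=HONDA_EQUILIBRIUM):
    """Shift the plateau so its midpoint matches the target equilibrium value."""
    offset = target_mid - sum(plateau) / 2  # -0.029V here
    return tuple(round(v + offset, 3) for v in plateau)

cell_range = shifted_plateau()
print(cell_range)                                      # (1.298, 1.338)
print(tuple(round(v * CELLS, 1) for v in cell_range))  # (155.8, 160.6)
```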
 